Test Report: KVM_Linux_crio 17174

7689d73509a567ada6f3653fa0ef2156acc9a338:2023-09-07:30902

Failed tests (28/290)

Order  Failed test  Duration (s)
25 TestAddons/parallel/Ingress 157.23
36 TestAddons/StoppedEnableDisable 155.22
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 170.98
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestMultiNode/serial/PingHostFrom2Pods 3.14
206 TestMultiNode/serial/RestartKeepsNodes 691.03
208 TestMultiNode/serial/StopMultiNode 142.94
215 TestPreload 292.5
221 TestRunningBinaryUpgrade 152.78
239 TestPause/serial/SecondStartNoReconfiguration 87.83
257 TestStoppedBinaryUpgrade/Upgrade 269.81
266 TestStartStop/group/old-k8s-version/serial/Stop 140.31
270 TestStartStop/group/no-preload/serial/Stop 140.6
272 TestStartStop/group/embed-certs/serial/Stop 139.5
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
280 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.72
281 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
282 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
285 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
287 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.25
288 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.24
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.07
290 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.13
291 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 466.45
292 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.73
293 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 396.95
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 164.31
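
To dig into any one of these, the failed test can be re-run on its own. The exact harness invocation is not recorded in this report, so the following is only a sketch of one plausible local re-run from a minikube source checkout, using the same driver and runtime as this job:

	# Hypothetical local re-run of one failed test from the table above.
	# Assumes a minikube checkout with binaries built into out/, a working
	# kvm2/libvirt host, and that the harness accepts -minikube-start-args.
	go test ./test/integration -v -timeout 90m \
	  -run 'TestAddons/parallel/Ingress' \
	  -args -minikube-start-args='--driver=kvm2 --container-runtime=crio'
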
TestAddons/parallel/Ingress (157.23s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-503456 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-503456 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-503456 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1d8f5361-915b-49a8-8113-8d0c061764cc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1d8f5361-915b-49a8-8113-8d0c061764cc] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.019206551s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-503456 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-503456 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.867344276s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-503456 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-503456 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.156
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-503456 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-503456 addons disable ingress-dns --alsologtostderr -v=1: (1.475246271s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-503456 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-503456 addons disable ingress --alsologtostderr -v=1: (7.766242346s)
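
The step that actually fails above is the curl issued inside the VM through minikube ssh; exit status 28 is consistent with curl's "operation timed out" exit code propagated back through ssh. A manual re-check of the same route, run while the profile is still up and before the ingress addon is disabled by the cleanup above, could look like the sketch below (only the profile name, namespace, and request come from this log; the added timeout and the exact commands are assumptions):

	# Hypothetical manual probe of the failing ingress path.
	kubectl --context addons-503456 -n ingress-nginx get pods -o wide
	kubectl --context addons-503456 get ingress --all-namespaces
	# Same request the test makes, with an explicit client-side timeout:
	out/minikube-linux-amd64 -p addons-503456 ssh \
	  "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
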
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-503456 -n addons-503456
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-503456 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-503456 logs -n 25: (1.200074364s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-435150 | jenkins | v1.31.2 | 06 Sep 23 23:37 UTC |                     |
	|         | -p download-only-435150        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-435150 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC |                     |
	|         | -p download-only-435150        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:38 UTC |
	| delete  | -p download-only-435150        | download-only-435150 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:38 UTC |
	| delete  | -p download-only-435150        | download-only-435150 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:38 UTC |
	| start   | --download-only -p             | binary-mirror-605415 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC |                     |
	|         | binary-mirror-605415           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45217         |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-605415        | binary-mirror-605415 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:38 UTC |
	| start   | -p addons-503456               | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:41 UTC |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	|         | --addons=helm-tiller           |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
	|         | -p addons-503456               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
	|         | addons-503456                  |                      |         |         |                     |                     |
	| addons  | addons-503456 addons           | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
	|         | addons-503456                  |                      |         |         |                     |                     |
	| ip      | addons-503456 ip               | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
	| addons  | addons-503456 addons disable   | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
	|         | registry --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-503456 addons disable   | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
	|         | helm-tiller --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| ssh     | addons-503456 ssh curl -s      | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                      |         |         |                     |                     |
	|         | nginx.example.com'             |                      |         |         |                     |                     |
	| addons  | addons-503456 addons           | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:42 UTC | 06 Sep 23 23:42 UTC |
	|         | disable csi-hostpath-driver    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-503456 addons           | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:42 UTC | 06 Sep 23 23:42 UTC |
	|         | disable volumesnapshots        |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ip      | addons-503456 ip               | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:44 UTC | 06 Sep 23 23:44 UTC |
	| addons  | addons-503456 addons disable   | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:44 UTC | 06 Sep 23 23:44 UTC |
	|         | ingress-dns --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-503456 addons disable   | addons-503456        | jenkins | v1.31.2 | 06 Sep 23 23:44 UTC | 06 Sep 23 23:44 UTC |
	|         | ingress --alsologtostderr -v=1 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 23:38:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 23:38:43.336183   14102 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:38:43.336319   14102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:38:43.336331   14102 out.go:309] Setting ErrFile to fd 2...
	I0906 23:38:43.336337   14102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:38:43.336537   14102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0906 23:38:43.337122   14102 out.go:303] Setting JSON to false
	I0906 23:38:43.337978   14102 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1268,"bootTime":1694042256,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:38:43.338037   14102 start.go:138] virtualization: kvm guest
	I0906 23:38:43.340516   14102 out.go:177] * [addons-503456] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0906 23:38:43.342080   14102 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 23:38:43.342106   14102 notify.go:220] Checking for updates...
	I0906 23:38:43.343704   14102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:38:43.345329   14102 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0906 23:38:43.346917   14102 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0906 23:38:43.348406   14102 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 23:38:43.350378   14102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 23:38:43.352005   14102 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 23:38:43.383913   14102 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 23:38:43.385300   14102 start.go:298] selected driver: kvm2
	I0906 23:38:43.385310   14102 start.go:902] validating driver "kvm2" against <nil>
	I0906 23:38:43.385321   14102 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 23:38:43.386040   14102 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:38:43.386127   14102 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 23:38:43.400217   14102 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0906 23:38:43.400257   14102 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 23:38:43.400451   14102 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 23:38:43.400483   14102 cni.go:84] Creating CNI manager for ""
	I0906 23:38:43.400489   14102 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 23:38:43.400499   14102 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 23:38:43.400508   14102 start_flags.go:321] config:
	{Name:addons-503456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-503456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:38:43.400635   14102 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:38:43.402592   14102 out.go:177] * Starting control plane node addons-503456 in cluster addons-503456
	I0906 23:38:43.404135   14102 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 23:38:43.404167   14102 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0906 23:38:43.404176   14102 cache.go:57] Caching tarball of preloaded images
	I0906 23:38:43.404289   14102 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 23:38:43.404300   14102 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0906 23:38:43.404590   14102 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/config.json ...
	I0906 23:38:43.404609   14102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/config.json: {Name:mkec06f4989423c9be0dd1f99de82a1016614cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:38:43.404796   14102 start.go:365] acquiring machines lock for addons-503456: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 23:38:43.404855   14102 start.go:369] acquired machines lock for "addons-503456" in 42.094µs
	I0906 23:38:43.404876   14102 start.go:93] Provisioning new machine with config: &{Name:addons-503456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-503456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 23:38:43.404938   14102 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 23:38:43.406689   14102 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 23:38:43.406819   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:38:43.406870   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:38:43.420288   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42105
	I0906 23:38:43.420644   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:38:43.421199   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:38:43.421222   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:38:43.421536   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:38:43.421717   14102 main.go:141] libmachine: (addons-503456) Calling .GetMachineName
	I0906 23:38:43.421862   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:38:43.421998   14102 start.go:159] libmachine.API.Create for "addons-503456" (driver="kvm2")
	I0906 23:38:43.422035   14102 client.go:168] LocalClient.Create starting
	I0906 23:38:43.422079   14102 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem
	I0906 23:38:43.598109   14102 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem
	I0906 23:38:43.727679   14102 main.go:141] libmachine: Running pre-create checks...
	I0906 23:38:43.727701   14102 main.go:141] libmachine: (addons-503456) Calling .PreCreateCheck
	I0906 23:38:43.728192   14102 main.go:141] libmachine: (addons-503456) Calling .GetConfigRaw
	I0906 23:38:43.728602   14102 main.go:141] libmachine: Creating machine...
	I0906 23:38:43.728617   14102 main.go:141] libmachine: (addons-503456) Calling .Create
	I0906 23:38:43.728778   14102 main.go:141] libmachine: (addons-503456) Creating KVM machine...
	I0906 23:38:43.729997   14102 main.go:141] libmachine: (addons-503456) DBG | found existing default KVM network
	I0906 23:38:43.730731   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:43.730582   14124 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000298c0}
	I0906 23:38:43.736217   14102 main.go:141] libmachine: (addons-503456) DBG | trying to create private KVM network mk-addons-503456 192.168.39.0/24...
	I0906 23:38:43.803925   14102 main.go:141] libmachine: (addons-503456) DBG | private KVM network mk-addons-503456 192.168.39.0/24 created
	I0906 23:38:43.803960   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:43.803892   14124 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0906 23:38:43.803988   14102 main.go:141] libmachine: (addons-503456) Setting up store path in /home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456 ...
	I0906 23:38:43.804029   14102 main.go:141] libmachine: (addons-503456) Building disk image from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0906 23:38:43.804097   14102 main.go:141] libmachine: (addons-503456) Downloading /home/jenkins/minikube-integration/17174-6470/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0906 23:38:44.031288   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:44.031167   14124 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa...
	I0906 23:38:44.226069   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:44.225924   14124 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/addons-503456.rawdisk...
	I0906 23:38:44.226105   14102 main.go:141] libmachine: (addons-503456) DBG | Writing magic tar header
	I0906 23:38:44.226116   14102 main.go:141] libmachine: (addons-503456) DBG | Writing SSH key tar header
	I0906 23:38:44.226127   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:44.226041   14124 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456 ...
	I0906 23:38:44.226177   14102 main.go:141] libmachine: (addons-503456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456
	I0906 23:38:44.226231   14102 main.go:141] libmachine: (addons-503456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines
	I0906 23:38:44.226245   14102 main.go:141] libmachine: (addons-503456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0906 23:38:44.226256   14102 main.go:141] libmachine: (addons-503456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470
	I0906 23:38:44.226272   14102 main.go:141] libmachine: (addons-503456) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456 (perms=drwx------)
	I0906 23:38:44.226282   14102 main.go:141] libmachine: (addons-503456) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 23:38:44.226308   14102 main.go:141] libmachine: (addons-503456) DBG | Checking permissions on dir: /home/jenkins
	I0906 23:38:44.226322   14102 main.go:141] libmachine: (addons-503456) DBG | Checking permissions on dir: /home
	I0906 23:38:44.226331   14102 main.go:141] libmachine: (addons-503456) DBG | Skipping /home - not owner
	I0906 23:38:44.226340   14102 main.go:141] libmachine: (addons-503456) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines (perms=drwxr-xr-x)
	I0906 23:38:44.226354   14102 main.go:141] libmachine: (addons-503456) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube (perms=drwxr-xr-x)
	I0906 23:38:44.226363   14102 main.go:141] libmachine: (addons-503456) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470 (perms=drwxrwxr-x)
	I0906 23:38:44.226376   14102 main.go:141] libmachine: (addons-503456) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 23:38:44.226388   14102 main.go:141] libmachine: (addons-503456) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 23:38:44.226397   14102 main.go:141] libmachine: (addons-503456) Creating domain...
	I0906 23:38:44.227400   14102 main.go:141] libmachine: (addons-503456) define libvirt domain using xml: 
	I0906 23:38:44.227427   14102 main.go:141] libmachine: (addons-503456) <domain type='kvm'>
	I0906 23:38:44.227436   14102 main.go:141] libmachine: (addons-503456)   <name>addons-503456</name>
	I0906 23:38:44.227447   14102 main.go:141] libmachine: (addons-503456)   <memory unit='MiB'>4000</memory>
	I0906 23:38:44.227476   14102 main.go:141] libmachine: (addons-503456)   <vcpu>2</vcpu>
	I0906 23:38:44.227499   14102 main.go:141] libmachine: (addons-503456)   <features>
	I0906 23:38:44.227511   14102 main.go:141] libmachine: (addons-503456)     <acpi/>
	I0906 23:38:44.227527   14102 main.go:141] libmachine: (addons-503456)     <apic/>
	I0906 23:38:44.227541   14102 main.go:141] libmachine: (addons-503456)     <pae/>
	I0906 23:38:44.227549   14102 main.go:141] libmachine: (addons-503456)     
	I0906 23:38:44.227563   14102 main.go:141] libmachine: (addons-503456)   </features>
	I0906 23:38:44.227575   14102 main.go:141] libmachine: (addons-503456)   <cpu mode='host-passthrough'>
	I0906 23:38:44.227586   14102 main.go:141] libmachine: (addons-503456)   
	I0906 23:38:44.227602   14102 main.go:141] libmachine: (addons-503456)   </cpu>
	I0906 23:38:44.227613   14102 main.go:141] libmachine: (addons-503456)   <os>
	I0906 23:38:44.227627   14102 main.go:141] libmachine: (addons-503456)     <type>hvm</type>
	I0906 23:38:44.227643   14102 main.go:141] libmachine: (addons-503456)     <boot dev='cdrom'/>
	I0906 23:38:44.227657   14102 main.go:141] libmachine: (addons-503456)     <boot dev='hd'/>
	I0906 23:38:44.227669   14102 main.go:141] libmachine: (addons-503456)     <bootmenu enable='no'/>
	I0906 23:38:44.227685   14102 main.go:141] libmachine: (addons-503456)   </os>
	I0906 23:38:44.227698   14102 main.go:141] libmachine: (addons-503456)   <devices>
	I0906 23:38:44.227729   14102 main.go:141] libmachine: (addons-503456)     <disk type='file' device='cdrom'>
	I0906 23:38:44.227758   14102 main.go:141] libmachine: (addons-503456)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/boot2docker.iso'/>
	I0906 23:38:44.227775   14102 main.go:141] libmachine: (addons-503456)       <target dev='hdc' bus='scsi'/>
	I0906 23:38:44.227788   14102 main.go:141] libmachine: (addons-503456)       <readonly/>
	I0906 23:38:44.227802   14102 main.go:141] libmachine: (addons-503456)     </disk>
	I0906 23:38:44.227816   14102 main.go:141] libmachine: (addons-503456)     <disk type='file' device='disk'>
	I0906 23:38:44.227833   14102 main.go:141] libmachine: (addons-503456)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 23:38:44.227860   14102 main.go:141] libmachine: (addons-503456)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/addons-503456.rawdisk'/>
	I0906 23:38:44.227876   14102 main.go:141] libmachine: (addons-503456)       <target dev='hda' bus='virtio'/>
	I0906 23:38:44.227888   14102 main.go:141] libmachine: (addons-503456)     </disk>
	I0906 23:38:44.227902   14102 main.go:141] libmachine: (addons-503456)     <interface type='network'>
	I0906 23:38:44.227915   14102 main.go:141] libmachine: (addons-503456)       <source network='mk-addons-503456'/>
	I0906 23:38:44.227938   14102 main.go:141] libmachine: (addons-503456)       <model type='virtio'/>
	I0906 23:38:44.227961   14102 main.go:141] libmachine: (addons-503456)     </interface>
	I0906 23:38:44.227979   14102 main.go:141] libmachine: (addons-503456)     <interface type='network'>
	I0906 23:38:44.227995   14102 main.go:141] libmachine: (addons-503456)       <source network='default'/>
	I0906 23:38:44.228009   14102 main.go:141] libmachine: (addons-503456)       <model type='virtio'/>
	I0906 23:38:44.228018   14102 main.go:141] libmachine: (addons-503456)     </interface>
	I0906 23:38:44.228029   14102 main.go:141] libmachine: (addons-503456)     <serial type='pty'>
	I0906 23:38:44.228040   14102 main.go:141] libmachine: (addons-503456)       <target port='0'/>
	I0906 23:38:44.228053   14102 main.go:141] libmachine: (addons-503456)     </serial>
	I0906 23:38:44.228065   14102 main.go:141] libmachine: (addons-503456)     <console type='pty'>
	I0906 23:38:44.228083   14102 main.go:141] libmachine: (addons-503456)       <target type='serial' port='0'/>
	I0906 23:38:44.228101   14102 main.go:141] libmachine: (addons-503456)     </console>
	I0906 23:38:44.228113   14102 main.go:141] libmachine: (addons-503456)     <rng model='virtio'>
	I0906 23:38:44.228127   14102 main.go:141] libmachine: (addons-503456)       <backend model='random'>/dev/random</backend>
	I0906 23:38:44.228140   14102 main.go:141] libmachine: (addons-503456)     </rng>
	I0906 23:38:44.228152   14102 main.go:141] libmachine: (addons-503456)     
	I0906 23:38:44.228165   14102 main.go:141] libmachine: (addons-503456)     
	I0906 23:38:44.228181   14102 main.go:141] libmachine: (addons-503456)   </devices>
	I0906 23:38:44.228194   14102 main.go:141] libmachine: (addons-503456) </domain>
	I0906 23:38:44.228203   14102 main.go:141] libmachine: (addons-503456) 
	I0906 23:38:44.234303   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:a1:5a:43 in network default
	I0906 23:38:44.234845   14102 main.go:141] libmachine: (addons-503456) Ensuring networks are active...
	I0906 23:38:44.234867   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:44.235486   14102 main.go:141] libmachine: (addons-503456) Ensuring network default is active
	I0906 23:38:44.235850   14102 main.go:141] libmachine: (addons-503456) Ensuring network mk-addons-503456 is active
	I0906 23:38:44.237783   14102 main.go:141] libmachine: (addons-503456) Getting domain xml...
	I0906 23:38:44.238526   14102 main.go:141] libmachine: (addons-503456) Creating domain...
	I0906 23:38:45.651637   14102 main.go:141] libmachine: (addons-503456) Waiting to get IP...
	I0906 23:38:45.652327   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:45.652715   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:38:45.652779   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:45.652729   14124 retry.go:31] will retry after 310.218015ms: waiting for machine to come up
	I0906 23:38:45.964177   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:45.964681   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:38:45.964731   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:45.964639   14124 retry.go:31] will retry after 356.90999ms: waiting for machine to come up
	I0906 23:38:46.323328   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:46.323720   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:38:46.323750   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:46.323658   14124 retry.go:31] will retry after 456.28229ms: waiting for machine to come up
	I0906 23:38:46.781104   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:46.781530   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:38:46.781561   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:46.781504   14124 retry.go:31] will retry after 541.855308ms: waiting for machine to come up
	I0906 23:38:47.325245   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:47.325671   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:38:47.325703   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:47.325626   14124 retry.go:31] will retry after 722.687051ms: waiting for machine to come up
	I0906 23:38:48.050069   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:48.050557   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:38:48.050598   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:48.050521   14124 retry.go:31] will retry after 707.427728ms: waiting for machine to come up
	I0906 23:38:48.760262   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:48.760824   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:38:48.760849   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:48.760782   14124 retry.go:31] will retry after 1.155784663s: waiting for machine to come up
	I0906 23:38:49.917964   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:49.918453   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:38:49.918476   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:49.918402   14124 retry.go:31] will retry after 1.37391949s: waiting for machine to come up
	I0906 23:38:51.294159   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:51.294628   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:38:51.294652   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:51.294568   14124 retry.go:31] will retry after 1.849249323s: waiting for machine to come up
	I0906 23:38:53.146703   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:53.147225   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:38:53.147253   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:53.147156   14124 retry.go:31] will retry after 2.106474627s: waiting for machine to come up
	I0906 23:38:55.256470   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:55.256766   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:38:55.256793   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:55.256728   14124 retry.go:31] will retry after 2.632239008s: waiting for machine to come up
	I0906 23:38:57.891824   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:38:57.892237   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:38:57.892255   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:38:57.892193   14124 retry.go:31] will retry after 2.702793933s: waiting for machine to come up
	I0906 23:39:00.596298   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:00.596750   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:39:00.596773   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:39:00.596697   14124 retry.go:31] will retry after 3.082462964s: waiting for machine to come up
	I0906 23:39:03.680644   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:03.681051   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find current IP address of domain addons-503456 in network mk-addons-503456
	I0906 23:39:03.681082   14102 main.go:141] libmachine: (addons-503456) DBG | I0906 23:39:03.680988   14124 retry.go:31] will retry after 4.720404374s: waiting for machine to come up
	I0906 23:39:08.406985   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:08.407466   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has current primary IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:08.407493   14102 main.go:141] libmachine: (addons-503456) Found IP for machine: 192.168.39.156
	I0906 23:39:08.407508   14102 main.go:141] libmachine: (addons-503456) Reserving static IP address...
	I0906 23:39:08.407800   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find host DHCP lease matching {name: "addons-503456", mac: "52:54:00:47:cb:ab", ip: "192.168.39.156"} in network mk-addons-503456
	I0906 23:39:08.477704   14102 main.go:141] libmachine: (addons-503456) Reserved static IP address: 192.168.39.156
	I0906 23:39:08.477734   14102 main.go:141] libmachine: (addons-503456) Waiting for SSH to be available...
	I0906 23:39:08.477744   14102 main.go:141] libmachine: (addons-503456) DBG | Getting to WaitForSSH function...
	I0906 23:39:08.480131   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:08.480405   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456
	I0906 23:39:08.480431   14102 main.go:141] libmachine: (addons-503456) DBG | unable to find defined IP address of network mk-addons-503456 interface with MAC address 52:54:00:47:cb:ab
	I0906 23:39:08.480603   14102 main.go:141] libmachine: (addons-503456) DBG | Using SSH client type: external
	I0906 23:39:08.480617   14102 main.go:141] libmachine: (addons-503456) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa (-rw-------)
	I0906 23:39:08.480664   14102 main.go:141] libmachine: (addons-503456) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 23:39:08.480681   14102 main.go:141] libmachine: (addons-503456) DBG | About to run SSH command:
	I0906 23:39:08.480721   14102 main.go:141] libmachine: (addons-503456) DBG | exit 0
	I0906 23:39:08.492238   14102 main.go:141] libmachine: (addons-503456) DBG | SSH cmd err, output: exit status 255: 
	I0906 23:39:08.492263   14102 main.go:141] libmachine: (addons-503456) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0906 23:39:08.492278   14102 main.go:141] libmachine: (addons-503456) DBG | command : exit 0
	I0906 23:39:08.492287   14102 main.go:141] libmachine: (addons-503456) DBG | err     : exit status 255
	I0906 23:39:08.492295   14102 main.go:141] libmachine: (addons-503456) DBG | output  : 
	I0906 23:39:11.492524   14102 main.go:141] libmachine: (addons-503456) DBG | Getting to WaitForSSH function...
	I0906 23:39:11.495206   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:11.495614   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:11.495642   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:11.495743   14102 main.go:141] libmachine: (addons-503456) DBG | Using SSH client type: external
	I0906 23:39:11.495770   14102 main.go:141] libmachine: (addons-503456) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa (-rw-------)
	I0906 23:39:11.495817   14102 main.go:141] libmachine: (addons-503456) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 23:39:11.495844   14102 main.go:141] libmachine: (addons-503456) DBG | About to run SSH command:
	I0906 23:39:11.495857   14102 main.go:141] libmachine: (addons-503456) DBG | exit 0
	I0906 23:39:11.590807   14102 main.go:141] libmachine: (addons-503456) DBG | SSH cmd err, output: <nil>: 
	I0906 23:39:11.591050   14102 main.go:141] libmachine: (addons-503456) KVM machine creation complete!
	I0906 23:39:11.591438   14102 main.go:141] libmachine: (addons-503456) Calling .GetConfigRaw
	I0906 23:39:11.592102   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:11.592318   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:11.592513   14102 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 23:39:11.592531   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:11.593817   14102 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 23:39:11.593830   14102 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 23:39:11.593836   14102 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 23:39:11.593842   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:11.595886   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:11.596321   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:11.596353   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:11.596491   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:11.596703   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:11.596858   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:11.597011   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:11.597154   14102 main.go:141] libmachine: Using SSH client type: native
	I0906 23:39:11.597543   14102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0906 23:39:11.597560   14102 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 23:39:11.726152   14102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 23:39:11.726176   14102 main.go:141] libmachine: Detecting the provisioner...
	I0906 23:39:11.726185   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:11.728797   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:11.729061   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:11.729091   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:11.729220   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:11.729412   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:11.729606   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:11.729802   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:11.729972   14102 main.go:141] libmachine: Using SSH client type: native
	I0906 23:39:11.730350   14102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0906 23:39:11.730362   14102 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 23:39:11.859908   14102 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0906 23:39:11.859997   14102 main.go:141] libmachine: found compatible host: buildroot
	I0906 23:39:11.860013   14102 main.go:141] libmachine: Provisioning with buildroot...
	I0906 23:39:11.860027   14102 main.go:141] libmachine: (addons-503456) Calling .GetMachineName
	I0906 23:39:11.860270   14102 buildroot.go:166] provisioning hostname "addons-503456"
	I0906 23:39:11.860292   14102 main.go:141] libmachine: (addons-503456) Calling .GetMachineName
	I0906 23:39:11.860487   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:11.863040   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:11.863420   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:11.863451   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:11.863581   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:11.863755   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:11.863879   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:11.864011   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:11.864202   14102 main.go:141] libmachine: Using SSH client type: native
	I0906 23:39:11.864610   14102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0906 23:39:11.864625   14102 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-503456 && echo "addons-503456" | sudo tee /etc/hostname
	I0906 23:39:12.003371   14102 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-503456
	
	I0906 23:39:12.003404   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:12.006117   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.006470   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:12.006498   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.006713   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:12.006907   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:12.007026   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:12.007138   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:12.007320   14102 main.go:141] libmachine: Using SSH client type: native
	I0906 23:39:12.007725   14102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0906 23:39:12.007743   14102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-503456' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-503456/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-503456' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 23:39:12.146862   14102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
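The hostname step above boils down to an idempotent /etc/hosts edit: rewrite the 127.0.1.1 line if one exists, append it otherwise, and do nothing when the hostname is already present. A small sketch that renders that shell snippet for an arbitrary hostname (the helper name is hypothetical):

// hosts_entry.go - build the idempotent /etc/hosts update shown above.
package main

import "fmt"

func hostsCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsCmd("addons-503456"))
}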
	I0906 23:39:12.146898   14102 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0906 23:39:12.146947   14102 buildroot.go:174] setting up certificates
	I0906 23:39:12.146957   14102 provision.go:83] configureAuth start
	I0906 23:39:12.146990   14102 main.go:141] libmachine: (addons-503456) Calling .GetMachineName
	I0906 23:39:12.147247   14102 main.go:141] libmachine: (addons-503456) Calling .GetIP
	I0906 23:39:12.149596   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.149897   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:12.149923   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.150074   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:12.152177   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.152548   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:12.152585   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.152726   14102 provision.go:138] copyHostCerts
	I0906 23:39:12.152798   14102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0906 23:39:12.152898   14102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0906 23:39:12.152959   14102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0906 23:39:12.153001   14102 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.addons-503456 san=[192.168.39.156 192.168.39.156 localhost 127.0.0.1 minikube addons-503456]
	I0906 23:39:12.279088   14102 provision.go:172] copyRemoteCerts
	I0906 23:39:12.279136   14102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 23:39:12.279157   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:12.281572   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.281891   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:12.281924   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.282104   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:12.282302   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:12.282488   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:12.282677   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:12.380754   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 23:39:12.402591   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 23:39:12.423967   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 23:39:12.445077   14102 provision.go:86] duration metric: configureAuth took 298.103781ms
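configureAuth above copies the shared CA material and issues a server certificate whose SANs cover the node IP, loopback, and the node name. A compact sketch of issuing such a certificate with Go's crypto/x509, self-signed here for brevity whereas the real flow signs with the machine CA:

// servercert_sans.go - issue a server cert with the SANs seen in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-503456"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "addons-503456"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.156"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}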
	I0906 23:39:12.445107   14102 buildroot.go:189] setting minikube options for container-runtime
	I0906 23:39:12.445338   14102 config.go:182] Loaded profile config "addons-503456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 23:39:12.445406   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:12.447788   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.448111   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:12.448141   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.448269   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:12.448456   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:12.448640   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:12.448847   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:12.449002   14102 main.go:141] libmachine: Using SSH client type: native
	I0906 23:39:12.449390   14102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0906 23:39:12.449411   14102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 23:39:12.755201   14102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
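The container-runtime option above is delivered as a one-line sysconfig drop-in that crio's unit sources. A sketch that writes the same file locally (the real flow does this over SSH and then restarts crio):

// crio_sysconfig.go - write the /etc/sysconfig/crio.minikube drop-in seen above.
package main

import "os"

func main() {
	content := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		panic(err)
	}
	// crio's systemd unit sources this file, so a `systemctl restart crio`
	// is still needed for the option to take effect.
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		panic(err)
	}
}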
	
	I0906 23:39:12.755222   14102 main.go:141] libmachine: Checking connection to Docker...
	I0906 23:39:12.755233   14102 main.go:141] libmachine: (addons-503456) Calling .GetURL
	I0906 23:39:12.756529   14102 main.go:141] libmachine: (addons-503456) DBG | Using libvirt version 6000000
	I0906 23:39:12.758554   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.758912   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:12.758944   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.759059   14102 main.go:141] libmachine: Docker is up and running!
	I0906 23:39:12.759076   14102 main.go:141] libmachine: Reticulating splines...
	I0906 23:39:12.759089   14102 client.go:171] LocalClient.Create took 29.337042437s
	I0906 23:39:12.759112   14102 start.go:167] duration metric: libmachine.API.Create for "addons-503456" took 29.337114158s
	I0906 23:39:12.759121   14102 start.go:300] post-start starting for "addons-503456" (driver="kvm2")
	I0906 23:39:12.759136   14102 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 23:39:12.759161   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:12.759364   14102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 23:39:12.759383   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:12.761225   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.761532   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:12.761559   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.761681   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:12.761824   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:12.761976   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:12.762079   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:12.855649   14102 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 23:39:12.859893   14102 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 23:39:12.859914   14102 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0906 23:39:12.859983   14102 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0906 23:39:12.860013   14102 start.go:303] post-start completed in 100.88647ms
	I0906 23:39:12.860042   14102 main.go:141] libmachine: (addons-503456) Calling .GetConfigRaw
	I0906 23:39:12.860587   14102 main.go:141] libmachine: (addons-503456) Calling .GetIP
	I0906 23:39:12.863286   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.863619   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:12.863665   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.863878   14102 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/config.json ...
	I0906 23:39:12.864082   14102 start.go:128] duration metric: createHost completed in 29.459124391s
	I0906 23:39:12.864118   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:12.866157   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.866499   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:12.866537   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.866619   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:12.866826   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:12.866996   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:12.867148   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:12.867295   14102 main.go:141] libmachine: Using SSH client type: native
	I0906 23:39:12.867704   14102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0906 23:39:12.867716   14102 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 23:39:12.995439   14102 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694043552.972541635
	
	I0906 23:39:12.995458   14102 fix.go:206] guest clock: 1694043552.972541635
	I0906 23:39:12.995468   14102 fix.go:219] Guest: 2023-09-06 23:39:12.972541635 +0000 UTC Remote: 2023-09-06 23:39:12.864104411 +0000 UTC m=+29.559778925 (delta=108.437224ms)
	I0906 23:39:12.995536   14102 fix.go:190] guest clock delta is within tolerance: 108.437224ms
	I0906 23:39:12.995546   14102 start.go:83] releasing machines lock for "addons-503456", held for 29.590678559s
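The guest-clock check above parses the guest's date +%s.%N output and compares it against the host clock; the run stays on the fast path because the delta (about 108ms) is within tolerance. A sketch of that comparison, with the one-second tolerance being an assumption for illustration:

// clockdelta.go - parse the guest clock and compare it against the local clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1694043552.972541635" // as returned by `date +%s.%N` on the guest
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	delta := time.Since(time.Unix(sec, nsec))
	fmt.Printf("guest clock delta: %v\n", delta)
	if math.Abs(delta.Seconds()) > 1.0 {
		fmt.Println("clock skew outside tolerance; a time sync would be needed")
	}
}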
	I0906 23:39:12.995580   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:12.995819   14102 main.go:141] libmachine: (addons-503456) Calling .GetIP
	I0906 23:39:12.998127   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.998376   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:12.998392   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:12.998514   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:12.998948   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:12.999138   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:12.999205   14102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 23:39:12.999256   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:12.999343   14102 ssh_runner.go:195] Run: cat /version.json
	I0906 23:39:12.999359   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:13.001792   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:13.002039   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:13.002118   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:13.002147   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:13.002288   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:13.002370   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:13.002397   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:13.002481   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:13.002620   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:13.002683   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:13.002800   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:13.002860   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:13.003035   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:13.003141   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:13.114903   14102 ssh_runner.go:195] Run: systemctl --version
	I0906 23:39:13.120382   14102 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 23:39:13.280144   14102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 23:39:13.286866   14102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 23:39:13.286940   14102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 23:39:13.300507   14102 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
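Disabling the conflicting CNI configs above amounts to renaming any bridge or podman conflist in /etc/cni/net.d with a .mk_disabled suffix. A local-filesystem sketch of the same idea (the real step runs find/mv over SSH):

// cni_disable.go - park conflicting bridge/podman CNI configs out of the way.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", src)
		}
	}
}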
	I0906 23:39:13.300531   14102 start.go:466] detecting cgroup driver to use...
	I0906 23:39:13.300593   14102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 23:39:13.315365   14102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 23:39:13.328364   14102 docker.go:196] disabling cri-docker service (if available) ...
	I0906 23:39:13.328409   14102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 23:39:13.342260   14102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 23:39:13.355440   14102 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 23:39:13.469133   14102 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 23:39:13.586375   14102 docker.go:212] disabling docker service ...
	I0906 23:39:13.586445   14102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 23:39:13.599749   14102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 23:39:13.610928   14102 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 23:39:13.715745   14102 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 23:39:13.822861   14102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 23:39:13.834346   14102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 23:39:13.849561   14102 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0906 23:39:13.849624   14102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:39:13.857895   14102 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 23:39:13.857956   14102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:39:13.866417   14102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:39:13.874875   14102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:39:13.883367   14102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 23:39:13.892092   14102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 23:39:13.899580   14102 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 23:39:13.899617   14102 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 23:39:13.911533   14102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 23:39:13.919809   14102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 23:39:14.015117   14102 ssh_runner.go:195] Run: sudo systemctl restart crio
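The two sed edits above pin the pause image and force the cgroupfs cgroup manager in 02-crio.conf before crio is restarted. A sketch doing the same substitutions with regexp on the local file, using the path and values from the log:

// crio_conf_edit.go - rewrite pause_image and cgroup_manager in 02-crio.conf.
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	// crio must be restarted (systemctl restart crio) for these to apply.
}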
	I0906 23:39:14.186116   14102 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 23:39:14.186222   14102 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 23:39:14.191169   14102 start.go:534] Will wait 60s for crictl version
	I0906 23:39:14.191247   14102 ssh_runner.go:195] Run: which crictl
	I0906 23:39:14.195014   14102 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 23:39:14.226388   14102 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0906 23:39:14.226519   14102 ssh_runner.go:195] Run: crio --version
	I0906 23:39:14.269039   14102 ssh_runner.go:195] Run: crio --version
	I0906 23:39:14.323514   14102 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0906 23:39:14.325061   14102 main.go:141] libmachine: (addons-503456) Calling .GetIP
	I0906 23:39:14.327747   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:14.328084   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:14.328108   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:14.328347   14102 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 23:39:14.332293   14102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 23:39:14.344029   14102 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 23:39:14.344083   14102 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 23:39:14.374119   14102 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0906 23:39:14.374178   14102 ssh_runner.go:195] Run: which lz4
	I0906 23:39:14.377994   14102 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0906 23:39:14.381780   14102 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 23:39:14.381806   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0906 23:39:16.057276   14102 crio.go:444] Took 1.679313 seconds to copy over tarball
	I0906 23:39:16.057354   14102 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 23:39:18.937656   14102 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.880268852s)
	I0906 23:39:18.937683   14102 crio.go:451] Took 2.880379 seconds to extract the tarball
	I0906 23:39:18.937694   14102 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 23:39:18.978607   14102 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 23:39:19.030418   14102 crio.go:496] all images are preloaded for cri-o runtime.
	I0906 23:39:19.030446   14102 cache_images.go:84] Images are preloaded, skipping loading
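The preload path above is: check for /preloaded.tar.lz4 on the guest, scp the cached tarball in when it is missing, unpack it into /var with lz4, then confirm the images via crictl. A sketch of the extraction step, assuming tar and lz4 binaries on PATH:

// preload_extract.go - unpack the preloaded image tarball into /var.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
		fmt.Println("preload tarball not present; it would be scp'd in first")
		return
	}
	// Equivalent of `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4`.
	cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	_ = os.Remove("/preloaded.tar.lz4") // tarball is deleted after extraction
}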
	I0906 23:39:19.030530   14102 ssh_runner.go:195] Run: crio config
	I0906 23:39:19.093566   14102 cni.go:84] Creating CNI manager for ""
	I0906 23:39:19.093594   14102 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 23:39:19.093619   14102 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 23:39:19.093642   14102 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.156 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-503456 NodeName:addons-503456 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 23:39:19.093820   14102 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-503456"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.156
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.156"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 23:39:19.093916   14102 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-503456 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-503456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
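The kubelet drop-in above is rendered from a handful of node-specific values (Kubernetes version, node name, node IP). A sketch of rendering the same 10-kubeadm.conf text with text/template; the field names here are illustrative, not minikube's actual types:

// kubelet_dropin.go - render the kubelet systemd drop-in shown above.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropin))
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.28.1",
		"NodeName":          "addons-503456",
		"NodeIP":            "192.168.39.156",
	})
}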
	I0906 23:39:19.093989   14102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 23:39:19.103427   14102 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 23:39:19.103487   14102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 23:39:19.112186   14102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0906 23:39:19.127211   14102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 23:39:19.141943   14102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0906 23:39:19.156844   14102 ssh_runner.go:195] Run: grep 192.168.39.156	control-plane.minikube.internal$ /etc/hosts
	I0906 23:39:19.160341   14102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 23:39:19.171107   14102 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456 for IP: 192.168.39.156
	I0906 23:39:19.171130   14102 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:19.171260   14102 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0906 23:39:19.290622   14102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt ...
	I0906 23:39:19.290656   14102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt: {Name:mkfa44d486b1d58d6e45189e242eebe7cd34a2c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:19.290873   14102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key ...
	I0906 23:39:19.290889   14102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key: {Name:mkf030428c28a6905c759d227005a2f22a5dde7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:19.290991   14102 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0906 23:39:19.363429   14102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt ...
	I0906 23:39:19.363456   14102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt: {Name:mke2b33cb93fce460aa4440ff3090ea0ef3a3913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:19.363631   14102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key ...
	I0906 23:39:19.363645   14102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key: {Name:mk80c2d4e173c34ad142c3fef624cd7f4a18bd09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:19.363766   14102 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.key
	I0906 23:39:19.363783   14102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt with IP's: []
	I0906 23:39:19.460863   14102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt ...
	I0906 23:39:19.460893   14102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: {Name:mkd25969ce7c273a617055cd05a22f4ae7edb6ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:19.461061   14102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.key ...
	I0906 23:39:19.461074   14102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.key: {Name:mkad7ad27287835889f65dd9a91f52b71b00d1e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:19.461165   14102 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/apiserver.key.c061e229
	I0906 23:39:19.461186   14102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/apiserver.crt.c061e229 with IP's: [192.168.39.156 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 23:39:19.515088   14102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/apiserver.crt.c061e229 ...
	I0906 23:39:19.515115   14102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/apiserver.crt.c061e229: {Name:mk436555155e2ff207a79585a35001281bb45b2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:19.515275   14102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/apiserver.key.c061e229 ...
	I0906 23:39:19.515290   14102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/apiserver.key.c061e229: {Name:mk14402854a65599d320083c50cf5be7731914f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:19.515376   14102 certs.go:337] copying /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/apiserver.crt.c061e229 -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/apiserver.crt
	I0906 23:39:19.515455   14102 certs.go:341] copying /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/apiserver.key.c061e229 -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/apiserver.key
	I0906 23:39:19.515518   14102 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/proxy-client.key
	I0906 23:39:19.515540   14102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/proxy-client.crt with IP's: []
	I0906 23:39:19.663189   14102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/proxy-client.crt ...
	I0906 23:39:19.663219   14102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/proxy-client.crt: {Name:mk47ef94cb3c67e09c65bb3150f24c60ed766948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:19.663393   14102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/proxy-client.key ...
	I0906 23:39:19.663407   14102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/proxy-client.key: {Name:mkc0aa937b54327f332ec5e6158325925b1c3dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:19.663607   14102 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 23:39:19.663656   14102 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0906 23:39:19.663693   14102 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0906 23:39:19.663729   14102 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0906 23:39:19.664292   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 23:39:19.690631   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 23:39:19.713918   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 23:39:19.736365   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 23:39:19.758837   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 23:39:19.780734   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 23:39:19.802758   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 23:39:19.825473   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 23:39:19.848480   14102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 23:39:19.873772   14102 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 23:39:19.890970   14102 ssh_runner.go:195] Run: openssl version
	I0906 23:39:19.896589   14102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 23:39:19.907236   14102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:39:19.911648   14102 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:39:19.911691   14102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:39:19.917351   14102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
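The b5213941.0 symlink above is the OpenSSL subject-hash name for the minikube CA, so system TLS lookups in /etc/ssl/certs can find it. A sketch that asks openssl for the hash and creates the link if it is not already there, using the paths from the log:

// cahash_link.go - link /etc/ssl/certs/<subject-hash>.0 to the minikube CA.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err != nil {
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}
}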
	I0906 23:39:19.927596   14102 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 23:39:19.931555   14102 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 23:39:19.931611   14102 kubeadm.go:404] StartCluster: {Name:addons-503456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-503456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:39:19.931713   14102 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 23:39:19.931786   14102 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 23:39:19.960571   14102 cri.go:89] found id: ""
	I0906 23:39:19.960631   14102 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 23:39:19.970258   14102 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 23:39:19.978924   14102 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 23:39:19.988547   14102 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 23:39:19.988585   14102 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
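The kubeadm init invocation above prefixes PATH with the version-scoped binaries directory and ignores a fixed set of preflight errors so a run over existing directories does not abort. A sketch of assembling that command line from the ignore list seen in the log:

// kubeadm_init_cmd.go - build the kubeadm init command line used above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	cmd := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
		strings.Join(ignored, ","))
	fmt.Println(cmd)
}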
	I0906 23:39:20.040198   14102 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0906 23:39:20.040318   14102 kubeadm.go:322] [preflight] Running pre-flight checks
	I0906 23:39:20.168659   14102 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 23:39:20.168845   14102 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 23:39:20.169018   14102 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 23:39:20.329091   14102 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 23:39:20.426538   14102 out.go:204]   - Generating certificates and keys ...
	I0906 23:39:20.426746   14102 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0906 23:39:20.426862   14102 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0906 23:39:20.562342   14102 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 23:39:20.740063   14102 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0906 23:39:21.011913   14102 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0906 23:39:21.213026   14102 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0906 23:39:21.303328   14102 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0906 23:39:21.303681   14102 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-503456 localhost] and IPs [192.168.39.156 127.0.0.1 ::1]
	I0906 23:39:21.504683   14102 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0906 23:39:21.505181   14102 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-503456 localhost] and IPs [192.168.39.156 127.0.0.1 ::1]
	I0906 23:39:21.729389   14102 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 23:39:21.982283   14102 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 23:39:22.230488   14102 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0906 23:39:22.230601   14102 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 23:39:22.507721   14102 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 23:39:22.587424   14102 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 23:39:22.768915   14102 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 23:39:22.936425   14102 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 23:39:22.937427   14102 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 23:39:22.939820   14102 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 23:39:22.941761   14102 out.go:204]   - Booting up control plane ...
	I0906 23:39:22.941908   14102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 23:39:22.942021   14102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 23:39:22.942141   14102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 23:39:22.962184   14102 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 23:39:22.962529   14102 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 23:39:22.962621   14102 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0906 23:39:23.088253   14102 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 23:39:31.586441   14102 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502055 seconds
	I0906 23:39:31.586592   14102 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 23:39:31.607296   14102 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 23:39:32.138443   14102 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 23:39:32.138664   14102 kubeadm.go:322] [mark-control-plane] Marking the node addons-503456 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 23:39:32.652595   14102 kubeadm.go:322] [bootstrap-token] Using token: umj947.dd0bihwglqboxc9r
	I0906 23:39:32.654110   14102 out.go:204]   - Configuring RBAC rules ...
	I0906 23:39:32.654219   14102 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 23:39:32.667742   14102 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 23:39:32.676898   14102 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 23:39:32.694493   14102 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 23:39:32.699595   14102 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 23:39:32.703554   14102 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 23:39:32.725136   14102 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 23:39:32.995748   14102 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0906 23:39:33.073966   14102 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0906 23:39:33.073988   14102 kubeadm.go:322] 
	I0906 23:39:33.074060   14102 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0906 23:39:33.074068   14102 kubeadm.go:322] 
	I0906 23:39:33.074193   14102 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0906 23:39:33.074216   14102 kubeadm.go:322] 
	I0906 23:39:33.074250   14102 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0906 23:39:33.074335   14102 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 23:39:33.074418   14102 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 23:39:33.074428   14102 kubeadm.go:322] 
	I0906 23:39:33.074497   14102 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0906 23:39:33.074523   14102 kubeadm.go:322] 
	I0906 23:39:33.074585   14102 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 23:39:33.074594   14102 kubeadm.go:322] 
	I0906 23:39:33.074658   14102 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0906 23:39:33.074758   14102 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 23:39:33.074871   14102 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 23:39:33.074886   14102 kubeadm.go:322] 
	I0906 23:39:33.074993   14102 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 23:39:33.075112   14102 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0906 23:39:33.075128   14102 kubeadm.go:322] 
	I0906 23:39:33.075229   14102 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token umj947.dd0bihwglqboxc9r \
	I0906 23:39:33.075367   14102 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0906 23:39:33.075403   14102 kubeadm.go:322] 	--control-plane 
	I0906 23:39:33.075413   14102 kubeadm.go:322] 
	I0906 23:39:33.075553   14102 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0906 23:39:33.075572   14102 kubeadm.go:322] 
	I0906 23:39:33.075683   14102 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token umj947.dd0bihwglqboxc9r \
	I0906 23:39:33.075867   14102 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0906 23:39:33.076051   14102 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 23:39:33.076077   14102 cni.go:84] Creating CNI manager for ""
	I0906 23:39:33.076090   14102 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 23:39:33.077879   14102 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 23:39:33.079226   14102 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 23:39:33.095564   14102 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
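For context, the 1-k8s.conflist written above is a standard bridge CNI configuration. The 457-byte file itself is not reproduced in the log, so the following is only an illustrative sketch of what such a conflist typically contains; the subnet and plugin options shown are assumptions, not the actual values minikube wrote here:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
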
	I0906 23:39:33.121568   14102 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 23:39:33.121692   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=addons-503456 minikube.k8s.io/updated_at=2023_09_06T23_39_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:33.121703   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:33.171879   14102 ops.go:34] apiserver oom_adj: -16
	I0906 23:39:33.435266   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:33.544752   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:34.162389   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:34.662672   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:35.161923   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:35.662735   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:36.162569   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:36.662026   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:37.162150   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:37.661908   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:38.162830   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:38.662838   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:39.162368   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:39.662460   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:40.162588   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:40.661793   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:41.162517   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:41.661952   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:42.161881   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:42.662167   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:43.161842   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:43.662157   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:44.162017   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:44.662733   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:45.162748   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:45.661919   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:46.162271   14102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:46.285847   14102 kubeadm.go:1081] duration metric: took 13.164189865s to wait for elevateKubeSystemPrivileges.
	I0906 23:39:46.285882   14102 kubeadm.go:406] StartCluster complete in 26.354274421s
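The repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges wait: judging from the duration metric at 23:39:46.285847, minikube keeps polling until the default service account exists before the cluster start is considered complete. A minimal shell sketch of an equivalent wait loop is shown below; the 0.5s interval is an assumption inferred from the ~500ms spacing of the retries in the log, not a documented setting:

	until sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # assumed retry interval; the log shows attempts roughly every 500ms
	done
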
	I0906 23:39:46.285903   14102 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:46.286070   14102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0906 23:39:46.286634   14102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:46.286900   14102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 23:39:46.286977   14102 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0906 23:39:46.287090   14102 addons.go:69] Setting volumesnapshots=true in profile "addons-503456"
	I0906 23:39:46.287099   14102 addons.go:69] Setting cloud-spanner=true in profile "addons-503456"
	I0906 23:39:46.287121   14102 addons.go:231] Setting addon cloud-spanner=true in "addons-503456"
	I0906 23:39:46.287127   14102 addons.go:231] Setting addon volumesnapshots=true in "addons-503456"
	I0906 23:39:46.287129   14102 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-503456"
	I0906 23:39:46.287155   14102 addons.go:69] Setting inspektor-gadget=true in profile "addons-503456"
	I0906 23:39:46.287155   14102 addons.go:69] Setting default-storageclass=true in profile "addons-503456"
	I0906 23:39:46.287180   14102 config.go:182] Loaded profile config "addons-503456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 23:39:46.287088   14102 addons.go:69] Setting gcp-auth=true in profile "addons-503456"
	I0906 23:39:46.287191   14102 addons.go:231] Setting addon inspektor-gadget=true in "addons-503456"
	I0906 23:39:46.287195   14102 host.go:66] Checking if "addons-503456" exists ...
	I0906 23:39:46.287198   14102 mustload.go:65] Loading cluster: addons-503456
	I0906 23:39:46.287210   14102 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-503456"
	I0906 23:39:46.287182   14102 host.go:66] Checking if "addons-503456" exists ...
	I0906 23:39:46.287219   14102 addons.go:69] Setting registry=true in profile "addons-503456"
	I0906 23:39:46.287231   14102 host.go:66] Checking if "addons-503456" exists ...
	I0906 23:39:46.287235   14102 addons.go:231] Setting addon registry=true in "addons-503456"
	I0906 23:39:46.287265   14102 host.go:66] Checking if "addons-503456" exists ...
	I0906 23:39:46.287282   14102 host.go:66] Checking if "addons-503456" exists ...
	I0906 23:39:46.287415   14102 config.go:182] Loaded profile config "addons-503456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 23:39:46.287685   14102 addons.go:69] Setting helm-tiller=true in profile "addons-503456"
	I0906 23:39:46.287731   14102 addons.go:231] Setting addon helm-tiller=true in "addons-503456"
	I0906 23:39:46.287784   14102 host.go:66] Checking if "addons-503456" exists ...
	I0906 23:39:46.288103   14102 addons.go:69] Setting ingress=true in profile "addons-503456"
	I0906 23:39:46.288132   14102 addons.go:231] Setting addon ingress=true in "addons-503456"
	I0906 23:39:46.288189   14102 host.go:66] Checking if "addons-503456" exists ...
	I0906 23:39:46.288638   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.288704   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.287206   14102 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-503456"
	I0906 23:39:46.288831   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.289252   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.288705   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.289277   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.289497   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.289540   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.289639   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.289656   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.289686   14102 addons.go:69] Setting ingress-dns=true in profile "addons-503456"
	I0906 23:39:46.289700   14102 addons.go:231] Setting addon ingress-dns=true in "addons-503456"
	I0906 23:39:46.288111   14102 addons.go:69] Setting storage-provisioner=true in profile "addons-503456"
	I0906 23:39:46.289717   14102 addons.go:231] Setting addon storage-provisioner=true in "addons-503456"
	I0906 23:39:46.289783   14102 host.go:66] Checking if "addons-503456" exists ...
	I0906 23:39:46.289818   14102 host.go:66] Checking if "addons-503456" exists ...
	I0906 23:39:46.290028   14102 addons.go:69] Setting metrics-server=true in profile "addons-503456"
	I0906 23:39:46.290046   14102 addons.go:231] Setting addon metrics-server=true in "addons-503456"
	I0906 23:39:46.290090   14102 host.go:66] Checking if "addons-503456" exists ...
	I0906 23:39:46.290429   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.290455   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.290580   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.290785   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.291011   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.291393   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.291743   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.291814   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.291833   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.291847   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.291885   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.291894   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.291704   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.292265   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.309794   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0906 23:39:46.310635   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.311246   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.311263   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.311737   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.311832   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36235
	I0906 23:39:46.312012   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42463
	I0906 23:39:46.314168   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0906 23:39:46.314204   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0906 23:39:46.314257   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.314213   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.314672   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.314874   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.314906   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.314921   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.315404   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.315410   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.315422   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.315428   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.315456   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.315483   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.315874   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.315893   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.316069   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.316115   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.316157   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.316258   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.316328   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.316757   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.316788   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.316825   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.316865   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.319533   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.319573   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.319637   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I0906 23:39:46.319974   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.320408   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.320428   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.320774   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.321306   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.321341   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.326086   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33227
	I0906 23:39:46.326454   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.326907   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.326924   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.327743   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.328263   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.328299   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.335406   14102 addons.go:231] Setting addon default-storageclass=true in "addons-503456"
	I0906 23:39:46.335450   14102 host.go:66] Checking if "addons-503456" exists ...
	I0906 23:39:46.335818   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.335837   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.336458   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I0906 23:39:46.336940   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.337874   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.337892   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.338209   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.338705   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.338737   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.339459   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I0906 23:39:46.340698   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.341005   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I0906 23:39:46.341263   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.341286   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.341736   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.341804   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.341942   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.342618   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.342633   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.343134   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.343689   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.343727   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.343871   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:46.346444   14102 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 23:39:46.347946   14102 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 23:39:46.345936   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0906 23:39:46.347892   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33015
	I0906 23:39:46.351216   14102 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 23:39:46.350380   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.350375   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.352866   14102 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 23:39:46.353310   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.353463   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.354540   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.354598   14102 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 23:39:46.354674   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41037
	I0906 23:39:46.354704   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.354919   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.356157   14102 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 23:39:46.357926   14102 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 23:39:46.358013   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.358026   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
	I0906 23:39:46.358532   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.359201   14102 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 23:39:46.360523   14102 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 23:39:46.360540   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 23:39:46.360559   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:46.359238   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.358984   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.359486   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.359758   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.361485   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.361503   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.362436   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.362639   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.363625   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I0906 23:39:46.364110   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.364691   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.364707   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.365081   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.365273   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.365553   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.365871   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:46.367744   14102 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0906 23:39:46.366306   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:46.366502   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:46.367210   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.367234   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:46.367416   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:46.370212   14102 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0906 23:39:46.368988   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.369014   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.369147   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:46.371090   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0906 23:39:46.371706   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:46.371885   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.372320   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45043
	I0906 23:39:46.374365   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
	I0906 23:39:46.375463   14102 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0906 23:39:46.374378   14102 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0906 23:39:46.374508   14102 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-503456" context rescaled to 1 replicas
	I0906 23:39:46.374867   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.374382   14102 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0906 23:39:46.375757   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.375761   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:46.375938   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.375983   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.376628   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33075
	I0906 23:39:46.378427   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.378450   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.378450   14102 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 23:39:46.378464   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 23:39:46.378481   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:46.377247   14102 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 23:39:46.378539   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0906 23:39:46.378551   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:46.377286   14102 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 23:39:46.379909   14102 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0906 23:39:46.379925   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0906 23:39:46.381455   14102 out.go:177] * Verifying Kubernetes components...
	I0906 23:39:46.379946   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:46.378333   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.378587   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.379407   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.379725   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0906 23:39:46.377520   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.381651   14102 host.go:66] Checking if "addons-503456" exists ...
	I0906 23:39:46.382640   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.382672   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.382755   14102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 23:39:46.382853   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.383034   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:46.383081   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.383114   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.383156   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.382401   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.383204   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:46.383212   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:46.383227   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.383261   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.383884   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.384017   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.384026   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.384071   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.384107   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:46.384249   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.384275   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.384388   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.384630   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.384685   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.384737   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:46.384754   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.384885   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.384901   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.385625   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:46.385697   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:46.385741   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.385888   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:46.386002   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:46.386109   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:46.386255   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:46.386286   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:46.386358   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:46.387878   14102 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0906 23:39:46.389598   14102 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0906 23:39:46.389615   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0906 23:39:46.387930   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:46.389632   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:46.387338   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:46.386685   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.389725   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:46.389757   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.388742   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:46.391157   14102 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0906 23:39:46.390083   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:46.393661   14102 out.go:177]   - Using image docker.io/registry:2.8.1
	I0906 23:39:46.392574   14102 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 23:39:46.392914   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:46.393455   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.393975   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:46.395906   14102 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0906 23:39:46.394760   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 23:39:46.394819   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:46.394954   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:46.395040   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:46.396389   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35133
	I0906 23:39:46.397152   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.397291   14102 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 23:39:46.397303   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0906 23:39:46.397321   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:46.397378   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:46.397939   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38607
	I0906 23:39:46.398186   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:46.398356   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:46.398820   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.399276   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.399878   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.399906   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.399879   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.399950   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.400438   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.400743   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.400784   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.401091   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.402168   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.402200   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.402225   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:46.402250   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.402698   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:46.402703   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:46.402725   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.402927   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:46.404436   14102 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0906 23:39:46.403208   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:46.403242   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:46.403612   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:46.405638   14102 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 23:39:46.405655   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0906 23:39:46.405677   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:46.407014   14102 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 23:39:46.405872   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:46.405886   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:46.408502   14102 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 23:39:46.408516   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 23:39:46.408533   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:46.409186   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35879
	I0906 23:39:46.409210   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:46.409186   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:46.409534   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:46.410599   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.411917   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.412390   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
	I0906 23:39:46.412430   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:46.412471   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.412704   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:46.412861   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.412930   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:46.413327   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.413347   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.413447   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.413467   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.413489   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:46.413708   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:46.414394   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38067
	I0906 23:39:46.414397   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.414426   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.415117   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.415173   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.415248   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:46.415487   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:46.415655   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:46.415691   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.415829   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:46.416006   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:46.416176   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:46.416308   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:46.416949   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:46.417121   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:46.417134   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:46.417240   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:46.418716   14102 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 23:39:46.417522   14102 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 23:39:46.417703   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:46.420142   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 23:39:46.420162   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:46.420166   14102 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 23:39:46.420182   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 23:39:46.420198   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:46.420276   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:46.423182   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.423290   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.423578   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:46.423601   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.423740   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:46.423775   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:46.423803   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:46.423900   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:46.423909   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:46.424036   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:46.424060   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:46.424159   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:46.424154   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:46.424259   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:46.637436   14102 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 23:39:46.637461   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 23:39:46.654966   14102 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0906 23:39:46.654986   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0906 23:39:46.696708   14102 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 23:39:46.696739   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0906 23:39:46.717710   14102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
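The sed pipeline above patches the coredns ConfigMap so that host.minikube.internal resolves to the host-side address (192.168.39.1) and query logging is enabled. Reconstructed from the two sed expressions, the patched Corefile fragment ends up containing roughly the snippet below; the surrounding stanza layout is the stock CoreDNS default and only the inserted hosts block and log directive come from the command itself:

	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
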
	I0906 23:39:46.718505   14102 node_ready.go:35] waiting up to 6m0s for node "addons-503456" to be "Ready" ...
	I0906 23:39:46.718900   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 23:39:46.725466   14102 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 23:39:46.725486   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 23:39:46.732762   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 23:39:46.740790   14102 node_ready.go:49] node "addons-503456" has status "Ready":"True"
	I0906 23:39:46.740809   14102 node_ready.go:38] duration metric: took 22.280375ms waiting for node "addons-503456" to be "Ready" ...
	I0906 23:39:46.740817   14102 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 23:39:46.744073   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 23:39:46.754995   14102 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace to be "Ready" ...
	I0906 23:39:46.762594   14102 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 23:39:46.762618   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 23:39:46.771573   14102 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 23:39:46.771592   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 23:39:46.786070   14102 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 23:39:46.786096   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0906 23:39:46.787287   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 23:39:46.788835   14102 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 23:39:46.788855   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 23:39:46.790861   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 23:39:46.813220   14102 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 23:39:46.813244   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 23:39:46.910396   14102 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 23:39:46.910423   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 23:39:46.912097   14102 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 23:39:46.912118   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 23:39:46.965469   14102 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 23:39:46.965499   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 23:39:46.977457   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 23:39:47.027800   14102 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 23:39:47.027824   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 23:39:47.041170   14102 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 23:39:47.041190   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 23:39:47.062529   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 23:39:47.081498   14102 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 23:39:47.081516   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 23:39:47.200055   14102 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 23:39:47.200077   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 23:39:47.211535   14102 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 23:39:47.211563   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 23:39:47.255982   14102 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 23:39:47.256002   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 23:39:47.279527   14102 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 23:39:47.279545   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 23:39:47.332293   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 23:39:47.407760   14102 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 23:39:47.407787   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 23:39:47.408805   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 23:39:47.410171   14102 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 23:39:47.410185   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 23:39:47.468099   14102 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 23:39:47.468118   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 23:39:47.507632   14102 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 23:39:47.507654   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 23:39:47.542064   14102 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 23:39:47.542090   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 23:39:47.598507   14102 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 23:39:47.598536   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0906 23:39:47.611819   14102 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 23:39:47.611851   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 23:39:47.655561   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 23:39:47.681519   14102 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 23:39:47.681543   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 23:39:47.724929   14102 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 23:39:47.724955   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 23:39:47.771017   14102 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 23:39:47.771039   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 23:39:47.810021   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 23:39:49.459164   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:39:50.289822   14102 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.572068383s)
	I0906 23:39:50.289857   14102 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
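For reference, the sed pipeline completed just above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host IP 192.168.39.1 and enables the log plugin. Reconstructed from the command itself (not captured from the cluster), the edited part of the server block would look roughly like:

        log        # inserted immediately before the existing "errors" line
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf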
	I0906 23:39:50.289832   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.570902022s)
	I0906 23:39:50.289896   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:50.289915   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:50.290181   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:50.290198   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:50.290208   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:50.290219   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:50.290223   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:50.290471   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:50.290511   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:50.290535   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:50.290552   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:50.290561   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:50.290797   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:50.290812   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:51.643266   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:39:52.782724   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.049926536s)
	I0906 23:39:52.782770   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.782798   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:52.783051   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:52.783104   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.783118   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.783136   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.783150   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:52.783455   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.783474   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.783501   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:52.888504   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.144394057s)
	I0906 23:39:52.888562   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.888576   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:52.888951   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:52.888985   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.888997   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.889007   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.889021   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:52.889287   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.889336   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.889336   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:53.053713   14102 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 23:39:53.053749   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:53.056815   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:53.057252   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:53.057281   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:53.057482   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:53.057678   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:53.057874   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:53.058036   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:53.268908   14102 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 23:39:53.342146   14102 addons.go:231] Setting addon gcp-auth=true in "addons-503456"
	I0906 23:39:53.342206   14102 host.go:66] Checking if "addons-503456" exists ...
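The "Setting addon gcp-auth=true" step above is the same operation a user would trigger from the minikube CLI once credentials are available on the VM (the google_application_credentials.json and google_cloud_project files were copied a few lines earlier); outside the harness that would be, roughly:

        minikube addons enable gcp-auth -p addons-503456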
	I0906 23:39:53.342540   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:53.342586   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:53.358192   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40231
	I0906 23:39:53.358715   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:53.359202   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:53.359218   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:53.359493   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:53.360089   14102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:39:53.360135   14102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:53.376204   14102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35655
	I0906 23:39:53.376634   14102 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:53.377160   14102 main.go:141] libmachine: Using API Version  1
	I0906 23:39:53.377186   14102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:53.377605   14102 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:53.377832   14102 main.go:141] libmachine: (addons-503456) Calling .GetState
	I0906 23:39:53.379535   14102 main.go:141] libmachine: (addons-503456) Calling .DriverName
	I0906 23:39:53.379772   14102 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 23:39:53.379796   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHHostname
	I0906 23:39:53.382435   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:53.382919   14102 main.go:141] libmachine: (addons-503456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:cb:ab", ip: ""} in network mk-addons-503456: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:59 +0000 UTC Type:0 Mac:52:54:00:47:cb:ab Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-503456 Clientid:01:52:54:00:47:cb:ab}
	I0906 23:39:53.382954   14102 main.go:141] libmachine: (addons-503456) DBG | domain addons-503456 has defined IP address 192.168.39.156 and MAC address 52:54:00:47:cb:ab in network mk-addons-503456
	I0906 23:39:53.383095   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHPort
	I0906 23:39:53.383267   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHKeyPath
	I0906 23:39:53.383415   14102 main.go:141] libmachine: (addons-503456) Calling .GetSSHUsername
	I0906 23:39:53.383586   14102 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/addons-503456/id_rsa Username:docker}
	I0906 23:39:53.831544   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:39:54.348654   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.56133495s)
	I0906 23:39:54.348708   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.348721   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:54.348760   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.286204887s)
	I0906 23:39:54.348709   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.37122178s)
	I0906 23:39:54.348654   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.557760059s)
	I0906 23:39:54.348794   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.348817   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.348831   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:54.348835   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:54.348844   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.016524988s)
	I0906 23:39:54.348866   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.348876   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:54.348797   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.348893   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:54.348976   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.94014059s)
	W0906 23:39:54.349010   14102 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 23:39:54.349051   14102 retry.go:31] will retry after 247.429833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
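The failure above is an ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, before those CRDs are established, so minikube schedules a retry (and, at 23:39:54.597210 below, re-runs the apply with --force). Done by hand, the race could be avoided by waiting for the CRD first; a minimal sketch using the object names from the log, not the harness's actual mechanism:

        kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
        kubectl wait --for=condition=established --timeout=60s \
            crd/volumesnapshotclasses.snapshot.storage.k8s.io
        kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml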
	I0906 23:39:54.349089   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.69349113s)
	I0906 23:39:54.349120   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.349130   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:54.349241   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:54.349276   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.349285   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.349295   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.349304   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:54.349344   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:54.349366   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.349366   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.349375   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.349382   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.349384   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.349392   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.349400   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:54.349393   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:54.349435   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:54.349454   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:54.349475   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.349482   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.349490   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.349498   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:54.349651   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.349662   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.349671   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.349679   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:54.349847   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:54.349872   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.349880   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.349888   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.349898   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:54.349973   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:54.349990   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.349996   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:54.350000   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.350019   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.350027   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.350036   14102 addons.go:467] Verifying addon registry=true in "addons-503456"
	I0906 23:39:54.352620   14102 out.go:177] * Verifying registry addon...
	I0906 23:39:54.350128   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.350146   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:54.351045   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:54.351073   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.351098   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:54.351120   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.351141   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.354077   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.354103   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.354115   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.354115   14102 addons.go:467] Verifying addon ingress=true in "addons-503456"
	I0906 23:39:54.354139   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.354177   14102 addons.go:467] Verifying addon metrics-server=true in "addons-503456"
	I0906 23:39:54.355878   14102 out.go:177] * Verifying ingress addon...
	I0906 23:39:54.354974   14102 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 23:39:54.357968   14102 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 23:39:54.419099   14102 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 23:39:54.419122   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:54.421587   14102 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 23:39:54.421606   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:54.433063   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:54.444624   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
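The kapi.go polling above and below simply watches the addon pods by label selector until they report Running; the equivalent manual check against this cluster (assuming the addons-503456 kubeconfig context used by the test) would be something like:

        kubectl --context addons-503456 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
        kubectl --context addons-503456 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx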
	I0906 23:39:54.597210   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 23:39:55.060836   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:55.213922   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:55.413668   14102 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.03387906s)
	I0906 23:39:55.415552   14102 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0906 23:39:55.413667   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.603583614s)
	I0906 23:39:55.417099   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:55.418688   14102 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0906 23:39:55.417113   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:55.420513   14102 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 23:39:55.420538   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 23:39:55.420698   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:55.420744   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:55.420762   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:55.420776   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:55.421005   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:55.421063   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:55.421080   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:55.421094   14102 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-503456"
	I0906 23:39:55.423619   14102 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 23:39:55.426019   14102 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 23:39:55.458157   14102 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 23:39:55.458185   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:55.492047   14102 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 23:39:55.492073   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 23:39:55.497279   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:55.497512   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:55.518268   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:55.536560   14102 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 23:39:55.536592   14102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0906 23:39:55.588086   14102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 23:39:55.834340   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:39:55.962970   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:55.962975   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:56.084906   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:56.452499   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:56.476060   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:56.555271   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:56.952604   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:56.960693   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:57.025178   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:57.243290   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.646034163s)
	I0906 23:39:57.243341   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:57.243355   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:57.243631   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:57.243662   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:57.243673   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:57.243682   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:57.243905   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:57.243961   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:57.243962   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:57.442146   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:57.457441   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:57.529751   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:57.809228   14102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.221104273s)
	I0906 23:39:57.809271   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:57.809280   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:57.809538   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:57.809547   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:57.809560   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:57.809571   14102 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:57.809581   14102 main.go:141] libmachine: (addons-503456) Calling .Close
	I0906 23:39:57.809847   14102 main.go:141] libmachine: (addons-503456) DBG | Closing plugin on server side
	I0906 23:39:57.809889   14102 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:57.809904   14102 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:57.811651   14102 addons.go:467] Verifying addon gcp-auth=true in "addons-503456"
	I0906 23:39:57.813544   14102 out.go:177] * Verifying gcp-auth addon...
	I0906 23:39:57.815922   14102 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 23:39:57.863346   14102 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 23:39:57.863368   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:57.892593   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:57.945302   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:57.971735   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:58.031994   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:58.277945   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:39:58.397744   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:58.438891   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:58.450559   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:58.526065   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:58.907462   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:58.938501   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:58.951749   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:59.032993   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:59.402601   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:59.440924   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:59.449788   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:59.525219   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:59.896930   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:59.940352   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:59.950014   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:00.026019   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:00.397143   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:00.439220   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:00.449138   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:00.524949   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:00.779339   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:00.897056   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:00.940406   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:00.949322   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:01.024253   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:01.398865   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:01.444091   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:01.457313   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:01.542396   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:01.897125   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:01.938852   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:01.949722   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:02.024927   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:02.397019   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:02.441537   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:02.455749   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:02.534146   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:02.790271   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:02.906033   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:02.940488   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:02.958640   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:03.027600   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:03.399077   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:03.451391   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:03.518989   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:03.523970   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:03.902941   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:03.938450   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:03.951659   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:04.024775   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:04.396386   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:04.441803   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:04.450787   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:04.524641   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:04.899449   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:04.939324   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:04.949354   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:05.024521   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:05.321221   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:05.401168   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:05.443167   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:05.450954   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:05.534287   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:05.897306   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:05.970488   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:05.970540   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:06.024826   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:06.402576   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:06.441460   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:06.462576   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:06.533292   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:06.898632   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:06.937650   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:06.949414   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:07.025301   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:07.401846   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:07.438817   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:07.458379   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:07.534940   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:07.778795   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:07.905227   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:08.273923   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:08.274237   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:08.278577   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:08.437998   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:08.441600   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:08.449303   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:08.525835   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:08.897186   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:08.938964   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:08.949780   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:09.028878   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:09.396697   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:09.437918   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:09.449726   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:09.529691   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:09.897546   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:09.937601   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:09.949247   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:10.025735   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:10.278219   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:10.397052   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:10.440719   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:10.451425   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:10.524309   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:10.897885   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:10.938369   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:10.950951   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:11.025642   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:11.397039   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:11.438233   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:11.451377   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:11.626225   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:12.113142   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:12.113505   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:12.115345   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:12.115462   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:12.400993   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:12.438786   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:12.449956   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:12.528331   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:12.777590   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:12.897945   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:12.938983   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:12.950230   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:13.023974   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:13.397253   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:13.438807   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:13.454001   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:13.531151   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:13.898100   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:13.939093   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:13.950118   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:14.026982   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:14.397313   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:14.438357   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:14.448767   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:14.533157   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:14.908364   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:14.908690   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:14.938307   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:14.949226   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:15.025136   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:15.397361   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:15.438836   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:15.449675   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:15.535565   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:15.897769   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:15.938219   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:15.948946   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:16.026610   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:16.508778   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:16.509429   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:16.512044   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:16.530847   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:16.898001   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:16.939275   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:16.949426   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:17.024786   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:17.277450   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:17.398731   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:17.438353   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:17.449922   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:17.524580   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:17.896892   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:17.938532   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:17.949518   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:18.027976   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:18.397093   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:18.439270   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:18.451559   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:18.526187   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:18.906655   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:18.943994   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:18.955356   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:19.025186   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:19.277537   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:19.399856   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:19.438712   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:19.451671   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:19.525265   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:19.896524   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:19.937989   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:19.949337   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:20.030465   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:20.398884   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:20.448601   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:20.459167   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:20.524889   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:20.899626   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:20.941733   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:20.950308   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:21.024825   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:21.396605   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:21.439198   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:21.451881   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:21.530444   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:21.777917   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:21.896569   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:21.938957   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:21.949820   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:22.028735   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:22.396909   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:22.438524   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:22.449038   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:22.523838   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:22.897050   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:22.938887   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:22.951923   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:23.029811   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:23.397175   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:23.438919   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:23.452656   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:23.529734   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:23.778243   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:23.900838   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:23.959034   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:23.978581   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:24.032763   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:24.397441   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:24.438699   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:24.449295   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:24.527220   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:24.898554   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:24.937843   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:24.949154   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:25.024279   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:25.397600   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:25.438122   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:25.450043   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:25.525018   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:25.896826   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:25.940921   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:25.951114   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:26.030141   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:26.410697   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:26.445007   14102 pod_ready.go:102] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:26.450728   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:26.452694   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:26.524202   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:26.896666   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:26.938557   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:26.948925   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:27.029721   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:27.397431   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:27.439817   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:27.449624   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:27.529518   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:27.896411   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:27.937867   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:27.950201   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:28.035895   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:28.282586   14102 pod_ready.go:92] pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace has status "Ready":"True"
	I0906 23:40:28.282617   14102 pod_ready.go:81] duration metric: took 41.527589535s waiting for pod "coredns-5dd5756b68-bxt4r" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:28.282631   14102 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-503456" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:28.293241   14102 pod_ready.go:92] pod "etcd-addons-503456" in "kube-system" namespace has status "Ready":"True"
	I0906 23:40:28.293266   14102 pod_ready.go:81] duration metric: took 10.626883ms waiting for pod "etcd-addons-503456" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:28.293278   14102 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-503456" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:28.300337   14102 pod_ready.go:92] pod "kube-apiserver-addons-503456" in "kube-system" namespace has status "Ready":"True"
	I0906 23:40:28.300360   14102 pod_ready.go:81] duration metric: took 7.073793ms waiting for pod "kube-apiserver-addons-503456" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:28.300372   14102 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-503456" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:28.318212   14102 pod_ready.go:92] pod "kube-controller-manager-addons-503456" in "kube-system" namespace has status "Ready":"True"
	I0906 23:40:28.318232   14102 pod_ready.go:81] duration metric: took 17.853769ms waiting for pod "kube-controller-manager-addons-503456" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:28.318241   14102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-llcm4" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:28.324131   14102 pod_ready.go:92] pod "kube-proxy-llcm4" in "kube-system" namespace has status "Ready":"True"
	I0906 23:40:28.324160   14102 pod_ready.go:81] duration metric: took 5.912942ms waiting for pod "kube-proxy-llcm4" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:28.324173   14102 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-503456" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:28.397130   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:28.438970   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:28.449085   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:28.759153   14102 pod_ready.go:92] pod "kube-scheduler-addons-503456" in "kube-system" namespace has status "Ready":"True"
	I0906 23:40:28.759178   14102 pod_ready.go:81] duration metric: took 434.998201ms waiting for pod "kube-scheduler-addons-503456" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:28.759189   14102 pod_ready.go:38] duration metric: took 42.018362466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 23:40:28.759208   14102 api_server.go:52] waiting for apiserver process to appear ...
	I0906 23:40:28.759266   14102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 23:40:28.762676   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:28.817319   14102 api_server.go:72] duration metric: took 42.437378604s to wait for apiserver process to appear ...
	I0906 23:40:28.817342   14102 api_server.go:88] waiting for apiserver healthz status ...
	I0906 23:40:28.817356   14102 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0906 23:40:28.822620   14102 api_server.go:279] https://192.168.39.156:8443/healthz returned 200:
	ok
	I0906 23:40:28.823975   14102 api_server.go:141] control plane version: v1.28.1
	I0906 23:40:28.823995   14102 api_server.go:131] duration metric: took 6.648434ms to wait for apiserver health ...
	I0906 23:40:28.824002   14102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 23:40:28.880342   14102 system_pods.go:59] 17 kube-system pods found
	I0906 23:40:28.880369   14102 system_pods.go:61] "coredns-5dd5756b68-bxt4r" [856a6766-816b-442c-8fc5-2935d6625ca7] Running
	I0906 23:40:28.880374   14102 system_pods.go:61] "csi-hostpath-attacher-0" [4c8303db-6c3d-4d07-98a0-57123c31afe2] Running
	I0906 23:40:28.880381   14102 system_pods.go:61] "csi-hostpath-resizer-0" [4259495f-99a0-4e14-bc0d-de46a7cf9764] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 23:40:28.880388   14102 system_pods.go:61] "csi-hostpathplugin-rktxd" [cdb9b73b-1392-4195-bc59-3d9c570e0611] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 23:40:28.880394   14102 system_pods.go:61] "etcd-addons-503456" [f12e65b7-7a11-4747-9467-333b74c962f4] Running
	I0906 23:40:28.880400   14102 system_pods.go:61] "kube-apiserver-addons-503456" [50dd3b86-aa5f-4130-aa8b-d5e1628ac2da] Running
	I0906 23:40:28.880404   14102 system_pods.go:61] "kube-controller-manager-addons-503456" [a93e10b6-ebcb-42cd-a74d-3774ab84a812] Running
	I0906 23:40:28.880410   14102 system_pods.go:61] "kube-ingress-dns-minikube" [ac53e12d-a2b6-40bb-84a9-eb3bad2ab0b6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0906 23:40:28.880414   14102 system_pods.go:61] "kube-proxy-llcm4" [f0d0f236-b54c-4731-95ae-e365788583da] Running
	I0906 23:40:28.880418   14102 system_pods.go:61] "kube-scheduler-addons-503456" [51031224-f553-4c51-b856-584644b2f297] Running
	I0906 23:40:28.880424   14102 system_pods.go:61] "metrics-server-7c66d45ddc-4v28l" [628112ae-73d6-4779-a757-b6197698e1d5] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 23:40:28.880429   14102 system_pods.go:61] "registry-proxy-smcjh" [54d212ef-6349-4b44-99f7-bc51cb724809] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 23:40:28.880437   14102 system_pods.go:61] "registry-wtw27" [eeb1866f-e448-437f-b333-3d93f770b680] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 23:40:28.880442   14102 system_pods.go:61] "snapshot-controller-58dbcc7b99-6s752" [66a7888a-51c5-4b89-99f5-2bbdd5110337] Running
	I0906 23:40:28.880446   14102 system_pods.go:61] "snapshot-controller-58dbcc7b99-nhbjv" [4ec9e9ff-6e45-4a91-86ab-51c69801f402] Running
	I0906 23:40:28.880450   14102 system_pods.go:61] "storage-provisioner" [f1cd0f63-f9ac-483c-b030-d6b148a81d8a] Running
	I0906 23:40:28.880462   14102 system_pods.go:61] "tiller-deploy-7b677967b9-7ns7n" [1fcb101f-c09b-4237-be12-23fbd6b68cda] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0906 23:40:28.880469   14102 system_pods.go:74] duration metric: took 56.462668ms to wait for pod list to return data ...
	I0906 23:40:28.880476   14102 default_sa.go:34] waiting for default service account to be created ...
	I0906 23:40:28.897504   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:28.938309   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:28.958457   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:29.039676   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:29.084874   14102 default_sa.go:45] found service account: "default"
	I0906 23:40:29.084894   14102 default_sa.go:55] duration metric: took 204.413921ms for default service account to be created ...
	I0906 23:40:29.084902   14102 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 23:40:29.281238   14102 system_pods.go:86] 17 kube-system pods found
	I0906 23:40:29.281265   14102 system_pods.go:89] "coredns-5dd5756b68-bxt4r" [856a6766-816b-442c-8fc5-2935d6625ca7] Running
	I0906 23:40:29.281270   14102 system_pods.go:89] "csi-hostpath-attacher-0" [4c8303db-6c3d-4d07-98a0-57123c31afe2] Running
	I0906 23:40:29.281277   14102 system_pods.go:89] "csi-hostpath-resizer-0" [4259495f-99a0-4e14-bc0d-de46a7cf9764] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 23:40:29.281286   14102 system_pods.go:89] "csi-hostpathplugin-rktxd" [cdb9b73b-1392-4195-bc59-3d9c570e0611] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 23:40:29.281292   14102 system_pods.go:89] "etcd-addons-503456" [f12e65b7-7a11-4747-9467-333b74c962f4] Running
	I0906 23:40:29.281296   14102 system_pods.go:89] "kube-apiserver-addons-503456" [50dd3b86-aa5f-4130-aa8b-d5e1628ac2da] Running
	I0906 23:40:29.281301   14102 system_pods.go:89] "kube-controller-manager-addons-503456" [a93e10b6-ebcb-42cd-a74d-3774ab84a812] Running
	I0906 23:40:29.281306   14102 system_pods.go:89] "kube-ingress-dns-minikube" [ac53e12d-a2b6-40bb-84a9-eb3bad2ab0b6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0906 23:40:29.281310   14102 system_pods.go:89] "kube-proxy-llcm4" [f0d0f236-b54c-4731-95ae-e365788583da] Running
	I0906 23:40:29.281315   14102 system_pods.go:89] "kube-scheduler-addons-503456" [51031224-f553-4c51-b856-584644b2f297] Running
	I0906 23:40:29.281320   14102 system_pods.go:89] "metrics-server-7c66d45ddc-4v28l" [628112ae-73d6-4779-a757-b6197698e1d5] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 23:40:29.281325   14102 system_pods.go:89] "registry-proxy-smcjh" [54d212ef-6349-4b44-99f7-bc51cb724809] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 23:40:29.281331   14102 system_pods.go:89] "registry-wtw27" [eeb1866f-e448-437f-b333-3d93f770b680] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 23:40:29.281335   14102 system_pods.go:89] "snapshot-controller-58dbcc7b99-6s752" [66a7888a-51c5-4b89-99f5-2bbdd5110337] Running
	I0906 23:40:29.281340   14102 system_pods.go:89] "snapshot-controller-58dbcc7b99-nhbjv" [4ec9e9ff-6e45-4a91-86ab-51c69801f402] Running
	I0906 23:40:29.281344   14102 system_pods.go:89] "storage-provisioner" [f1cd0f63-f9ac-483c-b030-d6b148a81d8a] Running
	I0906 23:40:29.281351   14102 system_pods.go:89] "tiller-deploy-7b677967b9-7ns7n" [1fcb101f-c09b-4237-be12-23fbd6b68cda] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0906 23:40:29.281357   14102 system_pods.go:126] duration metric: took 196.451146ms to wait for k8s-apps to be running ...
	I0906 23:40:29.281367   14102 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 23:40:29.281408   14102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 23:40:29.315836   14102 system_svc.go:56] duration metric: took 34.461249ms WaitForService to wait for kubelet.
	I0906 23:40:29.315857   14102 kubeadm.go:581] duration metric: took 42.935920461s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 23:40:29.315882   14102 node_conditions.go:102] verifying NodePressure condition ...
	I0906 23:40:29.399032   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:29.438127   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:29.448640   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:29.476241   14102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0906 23:40:29.476266   14102 node_conditions.go:123] node cpu capacity is 2
	I0906 23:40:29.476276   14102 node_conditions.go:105] duration metric: took 160.389934ms to run NodePressure ...
	I0906 23:40:29.476287   14102 start.go:228] waiting for startup goroutines ...
	I0906 23:40:29.524933   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:29.898388   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:29.939565   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:29.949590   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:30.024538   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:30.399565   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:30.438408   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:30.448805   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:30.529841   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:30.899056   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:30.939855   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:30.949740   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:31.029898   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:31.402736   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:31.438327   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:31.454583   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:31.525179   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:31.896684   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:31.940605   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:31.949708   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:32.034870   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:32.693911   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:32.694062   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:32.694753   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:32.697417   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:32.897519   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:32.941758   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:32.953680   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:33.024366   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:33.400267   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:33.439652   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:33.449673   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:33.524899   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:33.896606   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:33.955397   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:33.989766   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:34.031985   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:34.397512   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:34.438445   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:34.448645   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:34.527185   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:34.899278   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:34.940174   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:34.949528   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:35.028707   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:35.396649   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:35.439033   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:35.449779   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:35.526313   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:35.897888   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:35.940175   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:35.949739   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:36.030881   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:36.396767   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:36.442069   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:36.449608   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:36.529021   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:36.897254   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:36.939634   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:36.949033   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:37.024074   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:37.398274   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:37.439341   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:37.448948   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:37.525562   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:37.897029   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:37.938522   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:37.950869   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:38.025396   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:38.397043   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:38.442549   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:38.451916   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:38.525027   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:38.911465   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:38.938128   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:38.948847   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:39.026397   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:39.396471   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:39.438477   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:39.448899   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:39.526351   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:39.905445   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:39.966869   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:39.968715   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:40.024758   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:40.397072   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:40.439025   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:40.448696   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:40.528233   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:40.897469   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:40.943684   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:40.955956   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:41.024903   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:41.397974   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:41.438126   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:41.449198   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:41.527658   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:41.899730   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:41.938808   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:41.949268   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:42.024585   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:42.399415   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:42.437608   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:42.450765   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:42.524461   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:42.897557   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:42.938511   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:42.949857   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:43.029079   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:43.396683   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:43.438324   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:43.449660   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:43.530370   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:43.898449   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:43.938098   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:43.952196   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:44.025738   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:44.397008   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:44.438498   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:44.453614   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:44.525155   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:44.897067   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:45.064570   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:45.064809   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:45.066833   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:45.397046   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:45.438937   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:45.451871   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:45.531950   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:45.897726   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:45.938046   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:45.950412   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:46.024521   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:46.397000   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:46.437669   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:46.449651   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:46.540551   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:46.896946   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:46.938578   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:46.949975   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:47.032692   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:47.397185   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:47.438531   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:47.451901   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:47.527189   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:47.897121   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:47.939239   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:47.952826   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:48.025101   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:48.397305   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:48.437954   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:48.450497   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:48.528527   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:48.900424   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:48.937852   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:48.949828   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:49.025047   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:49.396376   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:49.438212   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:49.450098   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:49.523675   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:49.897042   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:49.938364   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:49.948721   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:50.024902   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:50.397203   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:50.438439   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:50.448654   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:50.527475   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:50.896882   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:50.938156   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:50.948781   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:51.024972   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:51.397341   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:51.438477   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:51.450551   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:51.533941   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:51.899061   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:51.942361   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:51.949097   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:52.029339   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:52.396630   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:52.437691   14102 kapi.go:107] duration metric: took 58.08271419s to wait for kubernetes.io/minikube-addons=registry ...
	I0906 23:40:52.451243   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:52.528045   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:52.900605   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:52.950448   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:53.025130   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:53.397299   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:53.450163   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:53.523911   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:53.899574   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:53.976757   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:54.039005   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:54.398605   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:54.670378   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:54.670449   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:54.899481   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:54.951302   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:55.025325   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:55.397234   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:55.455608   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:55.531097   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:55.901128   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:55.951062   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:56.024549   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:56.397605   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:56.450163   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:56.529401   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:56.896205   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:56.952549   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:57.028060   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:57.396619   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:57.450248   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:57.525454   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:57.897554   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:57.949670   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:58.028901   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:58.398050   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:58.450721   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:58.526892   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:58.897123   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:58.951043   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:59.026083   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:59.397750   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:59.453791   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:59.536449   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:59.897774   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:59.950574   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:00.025093   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:00.397058   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:00.449987   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:00.525045   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:00.896691   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:00.950542   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:01.026495   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:01.422756   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:01.453345   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:01.524810   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:01.898206   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:01.950502   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:02.031685   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:02.399470   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:02.457710   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:02.525074   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:02.896917   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:02.950632   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:03.032076   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:03.397293   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:03.449722   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:03.525855   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:03.897408   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:03.958586   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:04.024669   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:04.397143   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:04.451694   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:04.525667   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:04.961286   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:04.962816   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:05.026103   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:05.396889   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:05.449895   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:05.529262   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:05.897539   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:05.949903   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:06.025804   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:06.396402   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:06.449254   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:06.531013   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:06.897405   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:06.949876   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:07.025204   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:07.396691   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:07.454590   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:07.530803   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:07.896763   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:07.949879   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:08.025380   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:08.397758   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:08.450503   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:08.533320   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:08.899665   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:08.963873   14102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:09.028643   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:09.399693   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:09.449992   14102 kapi.go:107] duration metric: took 1m15.092021695s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 23:41:09.524480   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:09.897406   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:10.027649   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:10.397301   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:10.533766   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:10.897395   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:11.024989   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:11.399424   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:11.527406   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:11.897687   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:12.034424   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:12.396627   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:12.524155   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:12.897691   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:13.031255   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:13.396957   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:13.526431   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:13.904886   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:14.026992   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:14.396925   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:14.525134   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:14.897239   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:15.028555   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:15.406166   14102 kapi.go:107] duration metric: took 1m17.590236496s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 23:41:15.407905   14102 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-503456 cluster.
	I0906 23:41:15.409460   14102 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 23:41:15.410881   14102 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
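	(Editor's note: the three advisory lines above describe how the gcp-auth addon is opted out of per pod. As a minimal sketch, not taken from this report, a pod that should not receive mounted GCP credentials carries the `gcp-auth-skip-secret` label named in the log; the pod name, image, and the "true" value below are illustrative assumptions only.)

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds                 # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"     # label key quoted from the addon message above; value assumed
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/google-samples/hello-app:1.0   # illustrative image, not from this report

	(Applying such a manifest, e.g. `kubectl --context addons-503456 apply -f pod.yaml`, would leave that one pod without the mounted credentials while other pods in the cluster keep them.)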
	I0906 23:41:15.528119   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:16.024194   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:16.525048   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:17.028844   14102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:41:17.523772   14102 kapi.go:107] duration metric: took 1m22.097748949s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 23:41:17.525741   14102 out.go:177] * Enabled addons: default-storageclass, ingress-dns, storage-provisioner, helm-tiller, cloud-spanner, inspektor-gadget, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0906 23:41:17.527241   14102 addons.go:502] enable addons completed in 1m31.240263256s: enabled=[default-storageclass ingress-dns storage-provisioner helm-tiller cloud-spanner inspektor-gadget metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0906 23:41:17.527282   14102 start.go:233] waiting for cluster config update ...
	I0906 23:41:17.527302   14102 start.go:242] writing updated cluster config ...
	I0906 23:41:17.527562   14102 ssh_runner.go:195] Run: rm -f paused
	I0906 23:41:17.582276   14102 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0906 23:41:17.584444   14102 out.go:177] * Done! kubectl is now configured to use "addons-503456" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-09-06 23:38:56 UTC, ends at Wed 2023-09-06 23:44:11 UTC. --
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.185802584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=94425d01-20bb-4a19-8d80-fbe257a5939c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.186191962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3856f90afe390bb31c6573792ffaea71d666fffab8b146946a9cd004849c0154,PodSandboxId:c001d2ea99bbd1b9fc1e152a2c81b0e330d346afc60ad41fb0aa97de561abd30,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694043844209147182,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5rd75,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9503445f-d043-4e1b-9cd0-0515175b9382,},Annotations:map[string]string{io.kubernetes.container.hash: 2c19b625,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245d9f5ff2071f5655e0f0f7c37fbcf09daa7e161119e1d40391f89e6aa236c4,PodSandboxId:7c8b3a931a3d2ab794357b93cf9f6d41501a41b0b50b87c57fac168cbfe39968,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694043702132324613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d8f5361-915b-49a8-8113-8d0c061764cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 48fcdd6e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25f46fdbfd60b95c546c93157f52b441b51db6fd9bb450fe00a9a194b2da497b,PodSandboxId:12d2747137e94f9e3fb201cd1cc5b88746b3552b457fed7e7e074a84ac3e43e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694043687468685847,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8bfx2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 74712abb-a7f5-4f24-9c48-90a8918e78bb,},Annotations:map[string]string{io.kubernetes.container.hash: 4d82036c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f503fb0371ce418096473cfec46411466fcf153cc27d0af0bf581dfacd0122f7,PodSandboxId:d4e431feee7cd06e12721863117086f45ab8e284456954e7f51213d2fa42efa7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694043674796720948,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ksld8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7e7f38a0-d0e6-4de3-8683-bf18f7864051,},Annotations:map[string]string{io.kubernetes.container.hash: ce6649cc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9705ba63315e4a58a207ad87d0b3d6bffa8081457bafbfcafb715d8f9c3faec7,PodSandboxId:7736d629305acdd958a698b6acde410055c0939f4b81a2e861995bd57f654c4c,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16940436
68405087110,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5htks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4e3fee77-9944-48ae-bb9b-82a216c2ab19,},Annotations:map[string]string{io.kubernetes.container.hash: fd9f6a5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a0905abdcf5f705c1a9ef4ec8a09902a4a7bb5a3c9c23c6b8bb9c0f1d8c410,PodSandboxId:f05d372770c918e5a0074a1a5fbe740e8685a1b612a3031fb6f87c8afc8d8bcb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694043655471642604,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gnp6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd27e72d-ab29-4da2-8121-e5eb76cfd15d,},Annotations:map[string]string{io.kubernetes.container.hash: 96e59954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d54fe665a58a9d5f204a6f24cae10b39e1cdc8195765e5601aa3baabd24e16,PodSandboxId:b00b6676618812592810b1ade3ca8951bc47881087c283ca573a43fd4768f7f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694043601225215559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1cd0f63-f9ac-483c-b030-d6b148a81d8a,},Annotations:map[string]string{io.kubernetes.container.hash: c979a933,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b43deeb323b4af75eced39d3cab94f8fa2c694af92c3ee775eb00a91831e213,PodSandboxId:c9c25d25918ebbb6d881f4921738d1d2eb975221fd1ca4a823e1b6f8622e1e73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694043588451988954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d0f236-b54c-4731-95ae-e365788583da,},Annotations:map[string]string{io.kubernetes.container.hash: 946a1e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12227c776cd2cada611a2e3413a349129501b587e50fe78b8af32379e5d6251d,PodSandboxId:b0a38bd4d572636ca137f581d236c4a04f08d145f23f1720867a7955cc385b53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694043592170046669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bxt4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856a6766-816b-442c-8fc5-2935d6625ca7,},Annotations:map[string]string{io.kubernetes.container.hash: db14a119,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722f497830d7292f6fd0e488d4a9fa3b47a61448f04fdee7aa3bc6c525827768,PodSandboxId:4da890f6dbdee5b0f8276e53e367ed3377b5174e3031ba97da8f44ec7c28d137,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d
35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694043565675835012,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37882cc1315adb60311b506d37044e42,},Annotations:map[string]string{io.kubernetes.container.hash: b2eedab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd2e6f2b213e96498a9a342c50705d1056748d4a2edd13481e673529b1c1eb,PodSandboxId:5ddbf6569832e31efcb85d8b7ebb81cb458afcbbfc424d6a94692d2a92449fc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},
},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694043565473197252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60462fc0d58e368def56f7e57aabe683,},Annotations:map[string]string{io.kubernetes.container.hash: af0dbb3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156a1f97803d7a9b68644f727afe5ab7e053e8734aa5ced1cef930bef6ab3813,PodSandboxId:36edb84689922ef721519b10f5cdc0ee48dbcca963722a32010366111f52e9c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:regi
stry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694043565351393254,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eed804f5be6b4daf21ebf8f428e18f8,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc347f55dfa86e8f247a1aab30b2e3d2155c5876f81f5ed13b250754323996c,PodSandboxId:0af4dbedd8fd4e7c4b77165ac37043601a95da405e01d2ab9fea74baac45d582,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694043565084451560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a518658c3e1af92c0ab5f72875e953,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=94425d01-20bb-4a19-8d80-fbe257a5939c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.219469881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fcd0e510-e57d-4634-9537-00dd6a9c6c59 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.219534983Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fcd0e510-e57d-4634-9537-00dd6a9c6c59 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.219952681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3856f90afe390bb31c6573792ffaea71d666fffab8b146946a9cd004849c0154,PodSandboxId:c001d2ea99bbd1b9fc1e152a2c81b0e330d346afc60ad41fb0aa97de561abd30,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694043844209147182,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5rd75,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9503445f-d043-4e1b-9cd0-0515175b9382,},Annotations:map[string]string{io.kubernetes.container.hash: 2c19b625,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245d9f5ff2071f5655e0f0f7c37fbcf09daa7e161119e1d40391f89e6aa236c4,PodSandboxId:7c8b3a931a3d2ab794357b93cf9f6d41501a41b0b50b87c57fac168cbfe39968,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694043702132324613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d8f5361-915b-49a8-8113-8d0c061764cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 48fcdd6e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25f46fdbfd60b95c546c93157f52b441b51db6fd9bb450fe00a9a194b2da497b,PodSandboxId:12d2747137e94f9e3fb201cd1cc5b88746b3552b457fed7e7e074a84ac3e43e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694043687468685847,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8bfx2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 74712abb-a7f5-4f24-9c48-90a8918e78bb,},Annotations:map[string]string{io.kubernetes.container.hash: 4d82036c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f503fb0371ce418096473cfec46411466fcf153cc27d0af0bf581dfacd0122f7,PodSandboxId:d4e431feee7cd06e12721863117086f45ab8e284456954e7f51213d2fa42efa7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694043674796720948,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ksld8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7e7f38a0-d0e6-4de3-8683-bf18f7864051,},Annotations:map[string]string{io.kubernetes.container.hash: ce6649cc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9705ba63315e4a58a207ad87d0b3d6bffa8081457bafbfcafb715d8f9c3faec7,PodSandboxId:7736d629305acdd958a698b6acde410055c0939f4b81a2e861995bd57f654c4c,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16940436
68405087110,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5htks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4e3fee77-9944-48ae-bb9b-82a216c2ab19,},Annotations:map[string]string{io.kubernetes.container.hash: fd9f6a5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a0905abdcf5f705c1a9ef4ec8a09902a4a7bb5a3c9c23c6b8bb9c0f1d8c410,PodSandboxId:f05d372770c918e5a0074a1a5fbe740e8685a1b612a3031fb6f87c8afc8d8bcb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694043655471642604,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gnp6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd27e72d-ab29-4da2-8121-e5eb76cfd15d,},Annotations:map[string]string{io.kubernetes.container.hash: 96e59954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d54fe665a58a9d5f204a6f24cae10b39e1cdc8195765e5601aa3baabd24e16,PodSandboxId:b00b6676618812592810b1ade3ca8951bc47881087c283ca573a43fd4768f7f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694043601225215559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1cd0f63-f9ac-483c-b030-d6b148a81d8a,},Annotations:map[string]string{io.kubernetes.container.hash: c979a933,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b43deeb323b4af75eced39d3cab94f8fa2c694af92c3ee775eb00a91831e213,PodSandboxId:c9c25d25918ebbb6d881f4921738d1d2eb975221fd1ca4a823e1b6f8622e1e73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694043588451988954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d0f236-b54c-4731-95ae-e365788583da,},Annotations:map[string]string{io.kubernetes.container.hash: 946a1e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12227c776cd2cada611a2e3413a349129501b587e50fe78b8af32379e5d6251d,PodSandboxId:b0a38bd4d572636ca137f581d236c4a04f08d145f23f1720867a7955cc385b53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694043592170046669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bxt4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856a6766-816b-442c-8fc5-2935d6625ca7,},Annotations:map[string]string{io.kubernetes.container.hash: db14a119,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722f497830d7292f6fd0e488d4a9fa3b47a61448f04fdee7aa3bc6c525827768,PodSandboxId:4da890f6dbdee5b0f8276e53e367ed3377b5174e3031ba97da8f44ec7c28d137,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d
35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694043565675835012,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37882cc1315adb60311b506d37044e42,},Annotations:map[string]string{io.kubernetes.container.hash: b2eedab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd2e6f2b213e96498a9a342c50705d1056748d4a2edd13481e673529b1c1eb,PodSandboxId:5ddbf6569832e31efcb85d8b7ebb81cb458afcbbfc424d6a94692d2a92449fc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},
},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694043565473197252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60462fc0d58e368def56f7e57aabe683,},Annotations:map[string]string{io.kubernetes.container.hash: af0dbb3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156a1f97803d7a9b68644f727afe5ab7e053e8734aa5ced1cef930bef6ab3813,PodSandboxId:36edb84689922ef721519b10f5cdc0ee48dbcca963722a32010366111f52e9c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:regi
stry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694043565351393254,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eed804f5be6b4daf21ebf8f428e18f8,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc347f55dfa86e8f247a1aab30b2e3d2155c5876f81f5ed13b250754323996c,PodSandboxId:0af4dbedd8fd4e7c4b77165ac37043601a95da405e01d2ab9fea74baac45d582,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694043565084451560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a518658c3e1af92c0ab5f72875e953,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fcd0e510-e57d-4634-9537-00dd6a9c6c59 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.253521295Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1b60a0f7-84e2-4cf2-9cd2-753b780378cf name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.253952674Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c001d2ea99bbd1b9fc1e152a2c81b0e330d346afc60ad41fb0aa97de561abd30,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d77478584-5rd75,Uid:9503445f-d043-4e1b-9cd0-0515175b9382,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694043841226038822,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d77478584-5rd75,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9503445f-d043-4e1b-9cd0-0515175b9382,pod-template-hash: 5d77478584,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-06T23:44:00.894136441Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7c8b3a931a3d2ab794357b93cf9f6d41501a41b0b50b87c57fac168cbfe39968,Metadata:&PodSandboxMetadata{Name:nginx,Uid:1d8f5361-915b-49a8-8113-8d0c061764cc,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1694043696606911712,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d8f5361-915b-49a8-8113-8d0c061764cc,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-06T23:41:36.277068703Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:12d2747137e94f9e3fb201cd1cc5b88746b3552b457fed7e7e074a84ac3e43e0,Metadata:&PodSandboxMetadata{Name:headlamp-699c48fb74-8bfx2,Uid:74712abb-a7f5-4f24-9c48-90a8918e78bb,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694043679514990105,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-699c48fb74-8bfx2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 74712abb-a7f5-4f24-9c48-90a8918e78bb,pod-template-hash: 699c48fb74,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-
09-06T23:41:19.183131158Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4e431feee7cd06e12721863117086f45ab8e284456954e7f51213d2fa42efa7,Metadata:&PodSandboxMetadata{Name:gcp-auth-d4c87556c-ksld8,Uid:7e7f38a0-d0e6-4de3-8683-bf18f7864051,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694043662073117320,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-d4c87556c-ksld8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7e7f38a0-d0e6-4de3-8683-bf18f7864051,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: d4c87556c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-06T23:39:57.746941846Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b00b6676618812592810b1ade3ca8951bc47881087c283ca573a43fd4768f7f1,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f1cd0f63-f9ac-483c-b030-d6b148a81d8a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694043593611
251315,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1cd0f63-f9ac-483c-b030-d6b148a81d8a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"
tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-06T23:39:52.972283956Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b0a38bd4d572636ca137f581d236c4a04f08d145f23f1720867a7955cc385b53,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-bxt4r,Uid:856a6766-816b-442c-8fc5-2935d6625ca7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694043587993825618,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-bxt4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856a6766-816b-442c-8fc5-2935d6625ca7,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-06T23:39:47.657462365Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c9c25d25918ebbb6d881f4921738d1d2eb975221fd1ca4a823e1b6f8622e1e73,Metadata:&PodSandboxMetadata{Name:kube-proxy-llcm4,Uid:f0d0f236-b54c-4731-95ae-e365788583da,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1
694043586225802644,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-llcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d0f236-b54c-4731-95ae-e365788583da,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-06T23:39:45.889388787Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4da890f6dbdee5b0f8276e53e367ed3377b5174e3031ba97da8f44ec7c28d137,Metadata:&PodSandboxMetadata{Name:etcd-addons-503456,Uid:37882cc1315adb60311b506d37044e42,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694043564693827622,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37882cc1315adb60311b506d37044e42,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https:
//192.168.39.156:2379,kubernetes.io/config.hash: 37882cc1315adb60311b506d37044e42,kubernetes.io/config.seen: 2023-09-06T23:39:24.099422154Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5ddbf6569832e31efcb85d8b7ebb81cb458afcbbfc424d6a94692d2a92449fc7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-503456,Uid:60462fc0d58e368def56f7e57aabe683,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694043564674033948,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60462fc0d58e368def56f7e57aabe683,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.156:8443,kubernetes.io/config.hash: 60462fc0d58e368def56f7e57aabe683,kubernetes.io/config.seen: 2023-09-06T23:39:24.099423075Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0af4dbe
dd8fd4e7c4b77165ac37043601a95da405e01d2ab9fea74baac45d582,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-503456,Uid:22a518658c3e1af92c0ab5f72875e953,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694043564647782660,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a518658c3e1af92c0ab5f72875e953,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 22a518658c3e1af92c0ab5f72875e953,kubernetes.io/config.seen: 2023-09-06T23:39:24.099416717Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:36edb84689922ef721519b10f5cdc0ee48dbcca963722a32010366111f52e9c0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-503456,Uid:6eed804f5be6b4daf21ebf8f428e18f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694043564633656438,Labels:map[string]string{compone
nt: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eed804f5be6b4daf21ebf8f428e18f8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6eed804f5be6b4daf21ebf8f428e18f8,kubernetes.io/config.seen: 2023-09-06T23:39:24.099421131Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=1b60a0f7-84e2-4cf2-9cd2-753b780378cf name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.254743167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=660e9f6d-f344-4002-b7b7-433bcf6d733d name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.254834684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=660e9f6d-f344-4002-b7b7-433bcf6d733d name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.255127202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3856f90afe390bb31c6573792ffaea71d666fffab8b146946a9cd004849c0154,PodSandboxId:c001d2ea99bbd1b9fc1e152a2c81b0e330d346afc60ad41fb0aa97de561abd30,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694043844209147182,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5rd75,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9503445f-d043-4e1b-9cd0-0515175b9382,},Annotations:map[string]string{io.kubernetes.container.hash: 2c19b625,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245d9f5ff2071f5655e0f0f7c37fbcf09daa7e161119e1d40391f89e6aa236c4,PodSandboxId:7c8b3a931a3d2ab794357b93cf9f6d41501a41b0b50b87c57fac168cbfe39968,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694043702132324613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d8f5361-915b-49a8-8113-8d0c061764cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 48fcdd6e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25f46fdbfd60b95c546c93157f52b441b51db6fd9bb450fe00a9a194b2da497b,PodSandboxId:12d2747137e94f9e3fb201cd1cc5b88746b3552b457fed7e7e074a84ac3e43e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694043687468685847,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8bfx2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 74712abb-a7f5-4f24-9c48-90a8918e78bb,},Annotations:map[string]string{io.kubernetes.container.hash: 4d82036c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f503fb0371ce418096473cfec46411466fcf153cc27d0af0bf581dfacd0122f7,PodSandboxId:d4e431feee7cd06e12721863117086f45ab8e284456954e7f51213d2fa42efa7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694043674796720948,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ksld8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7e7f38a0-d0e6-4de3-8683-bf18f7864051,},Annotations:map[string]string{io.kubernetes.container.hash: ce6649cc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d54fe665a58a9d5f204a6f24cae10b39e1cdc8195765e5601aa3baabd24e16,PodSandboxId:b00b6676618812592810b1ade3ca8951bc47881087c283ca573a43fd4768f7f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694
043601225215559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1cd0f63-f9ac-483c-b030-d6b148a81d8a,},Annotations:map[string]string{io.kubernetes.container.hash: c979a933,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b43deeb323b4af75eced39d3cab94f8fa2c694af92c3ee775eb00a91831e213,PodSandboxId:c9c25d25918ebbb6d881f4921738d1d2eb975221fd1ca4a823e1b6f8622e1e73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694043588451988954,Labels:
map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d0f236-b54c-4731-95ae-e365788583da,},Annotations:map[string]string{io.kubernetes.container.hash: 946a1e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12227c776cd2cada611a2e3413a349129501b587e50fe78b8af32379e5d6251d,PodSandboxId:b0a38bd4d572636ca137f581d236c4a04f08d145f23f1720867a7955cc385b53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694043592170046669,Labels:map[string]string{io.kubernetes.c
ontainer.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bxt4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856a6766-816b-442c-8fc5-2935d6625ca7,},Annotations:map[string]string{io.kubernetes.container.hash: db14a119,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722f497830d7292f6fd0e488d4a9fa3b47a61448f04fdee7aa3bc6c525827768,PodSandboxId:4da890f6dbdee5b0f8276e53e367ed3377b5174e3031ba97da8f44ec7c28d137,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:reg
istry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694043565675835012,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37882cc1315adb60311b506d37044e42,},Annotations:map[string]string{io.kubernetes.container.hash: b2eedab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd2e6f2b213e96498a9a342c50705d1056748d4a2edd13481e673529b1c1eb,PodSandboxId:5ddbf6569832e31efcb85d8b7ebb81cb458afcbbfc424d6a94692d2a92449fc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3da
f38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694043565473197252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60462fc0d58e368def56f7e57aabe683,},Annotations:map[string]string{io.kubernetes.container.hash: af0dbb3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156a1f97803d7a9b68644f727afe5ab7e053e8734aa5ced1cef930bef6ab3813,PodSandboxId:36edb84689922ef721519b10f5cdc0ee48dbcca963722a32010366111f52e9c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522e
ac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694043565351393254,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eed804f5be6b4daf21ebf8f428e18f8,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc347f55dfa86e8f247a1aab30b2e3d2155c5876f81f5ed13b250754323996c,PodSandboxId:0af4dbedd8fd4e7c4b77165ac37043601a95da405e01d2ab9fea74baac45d582,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c5270
5de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694043565084451560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a518658c3e1af92c0ab5f72875e953,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=660e9f6d-f344-4002-b7b7-433bcf6d733d name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.255469490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=911b062f-a5ab-4623-8673-35b9c670ca96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.255514513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=911b062f-a5ab-4623-8673-35b9c670ca96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.255917249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3856f90afe390bb31c6573792ffaea71d666fffab8b146946a9cd004849c0154,PodSandboxId:c001d2ea99bbd1b9fc1e152a2c81b0e330d346afc60ad41fb0aa97de561abd30,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694043844209147182,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5rd75,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9503445f-d043-4e1b-9cd0-0515175b9382,},Annotations:map[string]string{io.kubernetes.container.hash: 2c19b625,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245d9f5ff2071f5655e0f0f7c37fbcf09daa7e161119e1d40391f89e6aa236c4,PodSandboxId:7c8b3a931a3d2ab794357b93cf9f6d41501a41b0b50b87c57fac168cbfe39968,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694043702132324613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d8f5361-915b-49a8-8113-8d0c061764cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 48fcdd6e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25f46fdbfd60b95c546c93157f52b441b51db6fd9bb450fe00a9a194b2da497b,PodSandboxId:12d2747137e94f9e3fb201cd1cc5b88746b3552b457fed7e7e074a84ac3e43e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694043687468685847,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8bfx2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 74712abb-a7f5-4f24-9c48-90a8918e78bb,},Annotations:map[string]string{io.kubernetes.container.hash: 4d82036c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f503fb0371ce418096473cfec46411466fcf153cc27d0af0bf581dfacd0122f7,PodSandboxId:d4e431feee7cd06e12721863117086f45ab8e284456954e7f51213d2fa42efa7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694043674796720948,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ksld8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7e7f38a0-d0e6-4de3-8683-bf18f7864051,},Annotations:map[string]string{io.kubernetes.container.hash: ce6649cc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9705ba63315e4a58a207ad87d0b3d6bffa8081457bafbfcafb715d8f9c3faec7,PodSandboxId:7736d629305acdd958a698b6acde410055c0939f4b81a2e861995bd57f654c4c,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16940436
68405087110,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5htks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4e3fee77-9944-48ae-bb9b-82a216c2ab19,},Annotations:map[string]string{io.kubernetes.container.hash: fd9f6a5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a0905abdcf5f705c1a9ef4ec8a09902a4a7bb5a3c9c23c6b8bb9c0f1d8c410,PodSandboxId:f05d372770c918e5a0074a1a5fbe740e8685a1b612a3031fb6f87c8afc8d8bcb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694043655471642604,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gnp6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd27e72d-ab29-4da2-8121-e5eb76cfd15d,},Annotations:map[string]string{io.kubernetes.container.hash: 96e59954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d54fe665a58a9d5f204a6f24cae10b39e1cdc8195765e5601aa3baabd24e16,PodSandboxId:b00b6676618812592810b1ade3ca8951bc47881087c283ca573a43fd4768f7f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694043601225215559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1cd0f63-f9ac-483c-b030-d6b148a81d8a,},Annotations:map[string]string{io.kubernetes.container.hash: c979a933,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b43deeb323b4af75eced39d3cab94f8fa2c694af92c3ee775eb00a91831e213,PodSandboxId:c9c25d25918ebbb6d881f4921738d1d2eb975221fd1ca4a823e1b6f8622e1e73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694043588451988954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d0f236-b54c-4731-95ae-e365788583da,},Annotations:map[string]string{io.kubernetes.container.hash: 946a1e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12227c776cd2cada611a2e3413a349129501b587e50fe78b8af32379e5d6251d,PodSandboxId:b0a38bd4d572636ca137f581d236c4a04f08d145f23f1720867a7955cc385b53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694043592170046669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bxt4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856a6766-816b-442c-8fc5-2935d6625ca7,},Annotations:map[string]string{io.kubernetes.container.hash: db14a119,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722f497830d7292f6fd0e488d4a9fa3b47a61448f04fdee7aa3bc6c525827768,PodSandboxId:4da890f6dbdee5b0f8276e53e367ed3377b5174e3031ba97da8f44ec7c28d137,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d
35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694043565675835012,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37882cc1315adb60311b506d37044e42,},Annotations:map[string]string{io.kubernetes.container.hash: b2eedab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd2e6f2b213e96498a9a342c50705d1056748d4a2edd13481e673529b1c1eb,PodSandboxId:5ddbf6569832e31efcb85d8b7ebb81cb458afcbbfc424d6a94692d2a92449fc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},
},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694043565473197252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60462fc0d58e368def56f7e57aabe683,},Annotations:map[string]string{io.kubernetes.container.hash: af0dbb3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156a1f97803d7a9b68644f727afe5ab7e053e8734aa5ced1cef930bef6ab3813,PodSandboxId:36edb84689922ef721519b10f5cdc0ee48dbcca963722a32010366111f52e9c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:regi
stry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694043565351393254,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eed804f5be6b4daf21ebf8f428e18f8,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc347f55dfa86e8f247a1aab30b2e3d2155c5876f81f5ed13b250754323996c,PodSandboxId:0af4dbedd8fd4e7c4b77165ac37043601a95da405e01d2ab9fea74baac45d582,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694043565084451560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a518658c3e1af92c0ab5f72875e953,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=911b062f-a5ab-4623-8673-35b9c670ca96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.292407858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=974dde95-9290-4912-9b02-5a5d20476ab6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.292557805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=974dde95-9290-4912-9b02-5a5d20476ab6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.293129548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3856f90afe390bb31c6573792ffaea71d666fffab8b146946a9cd004849c0154,PodSandboxId:c001d2ea99bbd1b9fc1e152a2c81b0e330d346afc60ad41fb0aa97de561abd30,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694043844209147182,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5rd75,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9503445f-d043-4e1b-9cd0-0515175b9382,},Annotations:map[string]string{io.kubernetes.container.hash: 2c19b625,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245d9f5ff2071f5655e0f0f7c37fbcf09daa7e161119e1d40391f89e6aa236c4,PodSandboxId:7c8b3a931a3d2ab794357b93cf9f6d41501a41b0b50b87c57fac168cbfe39968,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694043702132324613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d8f5361-915b-49a8-8113-8d0c061764cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 48fcdd6e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25f46fdbfd60b95c546c93157f52b441b51db6fd9bb450fe00a9a194b2da497b,PodSandboxId:12d2747137e94f9e3fb201cd1cc5b88746b3552b457fed7e7e074a84ac3e43e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694043687468685847,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8bfx2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 74712abb-a7f5-4f24-9c48-90a8918e78bb,},Annotations:map[string]string{io.kubernetes.container.hash: 4d82036c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f503fb0371ce418096473cfec46411466fcf153cc27d0af0bf581dfacd0122f7,PodSandboxId:d4e431feee7cd06e12721863117086f45ab8e284456954e7f51213d2fa42efa7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694043674796720948,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ksld8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7e7f38a0-d0e6-4de3-8683-bf18f7864051,},Annotations:map[string]string{io.kubernetes.container.hash: ce6649cc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9705ba63315e4a58a207ad87d0b3d6bffa8081457bafbfcafb715d8f9c3faec7,PodSandboxId:7736d629305acdd958a698b6acde410055c0939f4b81a2e861995bd57f654c4c,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16940436
68405087110,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5htks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4e3fee77-9944-48ae-bb9b-82a216c2ab19,},Annotations:map[string]string{io.kubernetes.container.hash: fd9f6a5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a0905abdcf5f705c1a9ef4ec8a09902a4a7bb5a3c9c23c6b8bb9c0f1d8c410,PodSandboxId:f05d372770c918e5a0074a1a5fbe740e8685a1b612a3031fb6f87c8afc8d8bcb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694043655471642604,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gnp6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd27e72d-ab29-4da2-8121-e5eb76cfd15d,},Annotations:map[string]string{io.kubernetes.container.hash: 96e59954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d54fe665a58a9d5f204a6f24cae10b39e1cdc8195765e5601aa3baabd24e16,PodSandboxId:b00b6676618812592810b1ade3ca8951bc47881087c283ca573a43fd4768f7f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694043601225215559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1cd0f63-f9ac-483c-b030-d6b148a81d8a,},Annotations:map[string]string{io.kubernetes.container.hash: c979a933,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b43deeb323b4af75eced39d3cab94f8fa2c694af92c3ee775eb00a91831e213,PodSandboxId:c9c25d25918ebbb6d881f4921738d1d2eb975221fd1ca4a823e1b6f8622e1e73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694043588451988954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d0f236-b54c-4731-95ae-e365788583da,},Annotations:map[string]string{io.kubernetes.container.hash: 946a1e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12227c776cd2cada611a2e3413a349129501b587e50fe78b8af32379e5d6251d,PodSandboxId:b0a38bd4d572636ca137f581d236c4a04f08d145f23f1720867a7955cc385b53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694043592170046669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bxt4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856a6766-816b-442c-8fc5-2935d6625ca7,},Annotations:map[string]string{io.kubernetes.container.hash: db14a119,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722f497830d7292f6fd0e488d4a9fa3b47a61448f04fdee7aa3bc6c525827768,PodSandboxId:4da890f6dbdee5b0f8276e53e367ed3377b5174e3031ba97da8f44ec7c28d137,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d
35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694043565675835012,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37882cc1315adb60311b506d37044e42,},Annotations:map[string]string{io.kubernetes.container.hash: b2eedab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd2e6f2b213e96498a9a342c50705d1056748d4a2edd13481e673529b1c1eb,PodSandboxId:5ddbf6569832e31efcb85d8b7ebb81cb458afcbbfc424d6a94692d2a92449fc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},
},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694043565473197252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60462fc0d58e368def56f7e57aabe683,},Annotations:map[string]string{io.kubernetes.container.hash: af0dbb3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156a1f97803d7a9b68644f727afe5ab7e053e8734aa5ced1cef930bef6ab3813,PodSandboxId:36edb84689922ef721519b10f5cdc0ee48dbcca963722a32010366111f52e9c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:regi
stry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694043565351393254,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eed804f5be6b4daf21ebf8f428e18f8,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc347f55dfa86e8f247a1aab30b2e3d2155c5876f81f5ed13b250754323996c,PodSandboxId:0af4dbedd8fd4e7c4b77165ac37043601a95da405e01d2ab9fea74baac45d582,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694043565084451560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a518658c3e1af92c0ab5f72875e953,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=974dde95-9290-4912-9b02-5a5d20476ab6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.328404061Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bdcc8739-41b0-4afc-9ada-6d4bb0d14912 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.328475338Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bdcc8739-41b0-4afc-9ada-6d4bb0d14912 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.330445169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3856f90afe390bb31c6573792ffaea71d666fffab8b146946a9cd004849c0154,PodSandboxId:c001d2ea99bbd1b9fc1e152a2c81b0e330d346afc60ad41fb0aa97de561abd30,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694043844209147182,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5rd75,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9503445f-d043-4e1b-9cd0-0515175b9382,},Annotations:map[string]string{io.kubernetes.container.hash: 2c19b625,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245d9f5ff2071f5655e0f0f7c37fbcf09daa7e161119e1d40391f89e6aa236c4,PodSandboxId:7c8b3a931a3d2ab794357b93cf9f6d41501a41b0b50b87c57fac168cbfe39968,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694043702132324613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d8f5361-915b-49a8-8113-8d0c061764cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 48fcdd6e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25f46fdbfd60b95c546c93157f52b441b51db6fd9bb450fe00a9a194b2da497b,PodSandboxId:12d2747137e94f9e3fb201cd1cc5b88746b3552b457fed7e7e074a84ac3e43e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694043687468685847,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8bfx2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 74712abb-a7f5-4f24-9c48-90a8918e78bb,},Annotations:map[string]string{io.kubernetes.container.hash: 4d82036c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f503fb0371ce418096473cfec46411466fcf153cc27d0af0bf581dfacd0122f7,PodSandboxId:d4e431feee7cd06e12721863117086f45ab8e284456954e7f51213d2fa42efa7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694043674796720948,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ksld8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7e7f38a0-d0e6-4de3-8683-bf18f7864051,},Annotations:map[string]string{io.kubernetes.container.hash: ce6649cc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9705ba63315e4a58a207ad87d0b3d6bffa8081457bafbfcafb715d8f9c3faec7,PodSandboxId:7736d629305acdd958a698b6acde410055c0939f4b81a2e861995bd57f654c4c,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16940436
68405087110,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5htks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4e3fee77-9944-48ae-bb9b-82a216c2ab19,},Annotations:map[string]string{io.kubernetes.container.hash: fd9f6a5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a0905abdcf5f705c1a9ef4ec8a09902a4a7bb5a3c9c23c6b8bb9c0f1d8c410,PodSandboxId:f05d372770c918e5a0074a1a5fbe740e8685a1b612a3031fb6f87c8afc8d8bcb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694043655471642604,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gnp6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd27e72d-ab29-4da2-8121-e5eb76cfd15d,},Annotations:map[string]string{io.kubernetes.container.hash: 96e59954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d54fe665a58a9d5f204a6f24cae10b39e1cdc8195765e5601aa3baabd24e16,PodSandboxId:b00b6676618812592810b1ade3ca8951bc47881087c283ca573a43fd4768f7f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694043601225215559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1cd0f63-f9ac-483c-b030-d6b148a81d8a,},Annotations:map[string]string{io.kubernetes.container.hash: c979a933,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b43deeb323b4af75eced39d3cab94f8fa2c694af92c3ee775eb00a91831e213,PodSandboxId:c9c25d25918ebbb6d881f4921738d1d2eb975221fd1ca4a823e1b6f8622e1e73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694043588451988954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d0f236-b54c-4731-95ae-e365788583da,},Annotations:map[string]string{io.kubernetes.container.hash: 946a1e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12227c776cd2cada611a2e3413a349129501b587e50fe78b8af32379e5d6251d,PodSandboxId:b0a38bd4d572636ca137f581d236c4a04f08d145f23f1720867a7955cc385b53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694043592170046669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bxt4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856a6766-816b-442c-8fc5-2935d6625ca7,},Annotations:map[string]string{io.kubernetes.container.hash: db14a119,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722f497830d7292f6fd0e488d4a9fa3b47a61448f04fdee7aa3bc6c525827768,PodSandboxId:4da890f6dbdee5b0f8276e53e367ed3377b5174e3031ba97da8f44ec7c28d137,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d
35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694043565675835012,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37882cc1315adb60311b506d37044e42,},Annotations:map[string]string{io.kubernetes.container.hash: b2eedab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd2e6f2b213e96498a9a342c50705d1056748d4a2edd13481e673529b1c1eb,PodSandboxId:5ddbf6569832e31efcb85d8b7ebb81cb458afcbbfc424d6a94692d2a92449fc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},
},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694043565473197252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60462fc0d58e368def56f7e57aabe683,},Annotations:map[string]string{io.kubernetes.container.hash: af0dbb3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156a1f97803d7a9b68644f727afe5ab7e053e8734aa5ced1cef930bef6ab3813,PodSandboxId:36edb84689922ef721519b10f5cdc0ee48dbcca963722a32010366111f52e9c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:regi
stry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694043565351393254,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eed804f5be6b4daf21ebf8f428e18f8,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc347f55dfa86e8f247a1aab30b2e3d2155c5876f81f5ed13b250754323996c,PodSandboxId:0af4dbedd8fd4e7c4b77165ac37043601a95da405e01d2ab9fea74baac45d582,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694043565084451560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a518658c3e1af92c0ab5f72875e953,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bdcc8739-41b0-4afc-9ada-6d4bb0d14912 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.369322383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=682bf5bd-f73b-4bbd-8a1a-eef8f389c753 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.369394257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=682bf5bd-f73b-4bbd-8a1a-eef8f389c753 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.369828348Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3856f90afe390bb31c6573792ffaea71d666fffab8b146946a9cd004849c0154,PodSandboxId:c001d2ea99bbd1b9fc1e152a2c81b0e330d346afc60ad41fb0aa97de561abd30,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694043844209147182,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5rd75,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9503445f-d043-4e1b-9cd0-0515175b9382,},Annotations:map[string]string{io.kubernetes.container.hash: 2c19b625,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245d9f5ff2071f5655e0f0f7c37fbcf09daa7e161119e1d40391f89e6aa236c4,PodSandboxId:7c8b3a931a3d2ab794357b93cf9f6d41501a41b0b50b87c57fac168cbfe39968,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694043702132324613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d8f5361-915b-49a8-8113-8d0c061764cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 48fcdd6e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25f46fdbfd60b95c546c93157f52b441b51db6fd9bb450fe00a9a194b2da497b,PodSandboxId:12d2747137e94f9e3fb201cd1cc5b88746b3552b457fed7e7e074a84ac3e43e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694043687468685847,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8bfx2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 74712abb-a7f5-4f24-9c48-90a8918e78bb,},Annotations:map[string]string{io.kubernetes.container.hash: 4d82036c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f503fb0371ce418096473cfec46411466fcf153cc27d0af0bf581dfacd0122f7,PodSandboxId:d4e431feee7cd06e12721863117086f45ab8e284456954e7f51213d2fa42efa7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694043674796720948,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ksld8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7e7f38a0-d0e6-4de3-8683-bf18f7864051,},Annotations:map[string]string{io.kubernetes.container.hash: ce6649cc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9705ba63315e4a58a207ad87d0b3d6bffa8081457bafbfcafb715d8f9c3faec7,PodSandboxId:7736d629305acdd958a698b6acde410055c0939f4b81a2e861995bd57f654c4c,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16940436
68405087110,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5htks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4e3fee77-9944-48ae-bb9b-82a216c2ab19,},Annotations:map[string]string{io.kubernetes.container.hash: fd9f6a5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a0905abdcf5f705c1a9ef4ec8a09902a4a7bb5a3c9c23c6b8bb9c0f1d8c410,PodSandboxId:f05d372770c918e5a0074a1a5fbe740e8685a1b612a3031fb6f87c8afc8d8bcb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694043655471642604,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gnp6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd27e72d-ab29-4da2-8121-e5eb76cfd15d,},Annotations:map[string]string{io.kubernetes.container.hash: 96e59954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d54fe665a58a9d5f204a6f24cae10b39e1cdc8195765e5601aa3baabd24e16,PodSandboxId:b00b6676618812592810b1ade3ca8951bc47881087c283ca573a43fd4768f7f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694043601225215559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1cd0f63-f9ac-483c-b030-d6b148a81d8a,},Annotations:map[string]string{io.kubernetes.container.hash: c979a933,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b43deeb323b4af75eced39d3cab94f8fa2c694af92c3ee775eb00a91831e213,PodSandboxId:c9c25d25918ebbb6d881f4921738d1d2eb975221fd1ca4a823e1b6f8622e1e73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694043588451988954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d0f236-b54c-4731-95ae-e365788583da,},Annotations:map[string]string{io.kubernetes.container.hash: 946a1e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12227c776cd2cada611a2e3413a349129501b587e50fe78b8af32379e5d6251d,PodSandboxId:b0a38bd4d572636ca137f581d236c4a04f08d145f23f1720867a7955cc385b53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694043592170046669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bxt4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856a6766-816b-442c-8fc5-2935d6625ca7,},Annotations:map[string]string{io.kubernetes.container.hash: db14a119,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722f497830d7292f6fd0e488d4a9fa3b47a61448f04fdee7aa3bc6c525827768,PodSandboxId:4da890f6dbdee5b0f8276e53e367ed3377b5174e3031ba97da8f44ec7c28d137,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d
35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694043565675835012,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37882cc1315adb60311b506d37044e42,},Annotations:map[string]string{io.kubernetes.container.hash: b2eedab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd2e6f2b213e96498a9a342c50705d1056748d4a2edd13481e673529b1c1eb,PodSandboxId:5ddbf6569832e31efcb85d8b7ebb81cb458afcbbfc424d6a94692d2a92449fc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},
},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694043565473197252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60462fc0d58e368def56f7e57aabe683,},Annotations:map[string]string{io.kubernetes.container.hash: af0dbb3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156a1f97803d7a9b68644f727afe5ab7e053e8734aa5ced1cef930bef6ab3813,PodSandboxId:36edb84689922ef721519b10f5cdc0ee48dbcca963722a32010366111f52e9c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:regi
stry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694043565351393254,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eed804f5be6b4daf21ebf8f428e18f8,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc347f55dfa86e8f247a1aab30b2e3d2155c5876f81f5ed13b250754323996c,PodSandboxId:0af4dbedd8fd4e7c4b77165ac37043601a95da405e01d2ab9fea74baac45d582,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694043565084451560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a518658c3e1af92c0ab5f72875e953,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=682bf5bd-f73b-4bbd-8a1a-eef8f389c753 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.407227294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dd8d5292-418e-4598-8df5-06ae6f463466 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.407319631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dd8d5292-418e-4598-8df5-06ae6f463466 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:44:11 addons-503456 crio[717]: time="2023-09-06 23:44:11.407681347Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3856f90afe390bb31c6573792ffaea71d666fffab8b146946a9cd004849c0154,PodSandboxId:c001d2ea99bbd1b9fc1e152a2c81b0e330d346afc60ad41fb0aa97de561abd30,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694043844209147182,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5rd75,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9503445f-d043-4e1b-9cd0-0515175b9382,},Annotations:map[string]string{io.kubernetes.container.hash: 2c19b625,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:245d9f5ff2071f5655e0f0f7c37fbcf09daa7e161119e1d40391f89e6aa236c4,PodSandboxId:7c8b3a931a3d2ab794357b93cf9f6d41501a41b0b50b87c57fac168cbfe39968,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694043702132324613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d8f5361-915b-49a8-8113-8d0c061764cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 48fcdd6e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25f46fdbfd60b95c546c93157f52b441b51db6fd9bb450fe00a9a194b2da497b,PodSandboxId:12d2747137e94f9e3fb201cd1cc5b88746b3552b457fed7e7e074a84ac3e43e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694043687468685847,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8bfx2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 74712abb-a7f5-4f24-9c48-90a8918e78bb,},Annotations:map[string]string{io.kubernetes.container.hash: 4d82036c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f503fb0371ce418096473cfec46411466fcf153cc27d0af0bf581dfacd0122f7,PodSandboxId:d4e431feee7cd06e12721863117086f45ab8e284456954e7f51213d2fa42efa7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694043674796720948,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ksld8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 7e7f38a0-d0e6-4de3-8683-bf18f7864051,},Annotations:map[string]string{io.kubernetes.container.hash: ce6649cc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9705ba63315e4a58a207ad87d0b3d6bffa8081457bafbfcafb715d8f9c3faec7,PodSandboxId:7736d629305acdd958a698b6acde410055c0939f4b81a2e861995bd57f654c4c,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16940436
68405087110,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5htks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4e3fee77-9944-48ae-bb9b-82a216c2ab19,},Annotations:map[string]string{io.kubernetes.container.hash: fd9f6a5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a0905abdcf5f705c1a9ef4ec8a09902a4a7bb5a3c9c23c6b8bb9c0f1d8c410,PodSandboxId:f05d372770c918e5a0074a1a5fbe740e8685a1b612a3031fb6f87c8afc8d8bcb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694043655471642604,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gnp6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd27e72d-ab29-4da2-8121-e5eb76cfd15d,},Annotations:map[string]string{io.kubernetes.container.hash: 96e59954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d54fe665a58a9d5f204a6f24cae10b39e1cdc8195765e5601aa3baabd24e16,PodSandboxId:b00b6676618812592810b1ade3ca8951bc47881087c283ca573a43fd4768f7f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694043601225215559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1cd0f63-f9ac-483c-b030-d6b148a81d8a,},Annotations:map[string]string{io.kubernetes.container.hash: c979a933,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b43deeb323b4af75eced39d3cab94f8fa2c694af92c3ee775eb00a91831e213,PodSandboxId:c9c25d25918ebbb6d881f4921738d1d2eb975221fd1ca4a823e1b6f8622e1e73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694043588451988954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d0f236-b54c-4731-95ae-e365788583da,},Annotations:map[string]string{io.kubernetes.container.hash: 946a1e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12227c776cd2cada611a2e3413a349129501b587e50fe78b8af32379e5d6251d,PodSandboxId:b0a38bd4d572636ca137f581d236c4a04f08d145f23f1720867a7955cc385b53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694043592170046669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bxt4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856a6766-816b-442c-8fc5-2935d6625ca7,},Annotations:map[string]string{io.kubernetes.container.hash: db14a119,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722f497830d7292f6fd0e488d4a9fa3b47a61448f04fdee7aa3bc6c525827768,PodSandboxId:4da890f6dbdee5b0f8276e53e367ed3377b5174e3031ba97da8f44ec7c28d137,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d
35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694043565675835012,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37882cc1315adb60311b506d37044e42,},Annotations:map[string]string{io.kubernetes.container.hash: b2eedab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcd2e6f2b213e96498a9a342c50705d1056748d4a2edd13481e673529b1c1eb,PodSandboxId:5ddbf6569832e31efcb85d8b7ebb81cb458afcbbfc424d6a94692d2a92449fc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},
},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694043565473197252,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60462fc0d58e368def56f7e57aabe683,},Annotations:map[string]string{io.kubernetes.container.hash: af0dbb3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156a1f97803d7a9b68644f727afe5ab7e053e8734aa5ced1cef930bef6ab3813,PodSandboxId:36edb84689922ef721519b10f5cdc0ee48dbcca963722a32010366111f52e9c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:regi
stry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694043565351393254,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eed804f5be6b4daf21ebf8f428e18f8,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc347f55dfa86e8f247a1aab30b2e3d2155c5876f81f5ed13b250754323996c,PodSandboxId:0af4dbedd8fd4e7c4b77165ac37043601a95da405e01d2ab9fea74baac45d582,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694043565084451560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-503456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a518658c3e1af92c0ab5f72875e953,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dd8d5292-418e-4598-8df5-06ae6f463466 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID
	3856f90afe390       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb                      7 seconds ago       Running             hello-world-app           0                   c001d2ea99bbd
	245d9f5ff2071       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                              2 minutes ago       Running             nginx                     0                   7c8b3a931a3d2
	25f46fdbfd60b       ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552                        2 minutes ago       Running             headlamp                  0                   12d2747137e94
	f503fb0371ce4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   d4e431feee7cd
	9705ba63315e4       7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0                                                             3 minutes ago       Exited              patch                     2                   7736d629305ac
	38a0905abdcf5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   f05d372770c91
	16d54fe665a58       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   b00b667661881
	12227c776cd2c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   b0a38bd4d5726
	7b43deeb323b4       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                                             4 minutes ago       Running             kube-proxy                0                   c9c25d25918eb
	722f497830d72       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   4da890f6dbdee
	ffcd2e6f2b213       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                                             4 minutes ago       Running             kube-apiserver            0                   5ddbf6569832e
	156a1f97803d7       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                                             4 minutes ago       Running             kube-scheduler            0                   36edb84689922
	7bc347f55dfa8       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                                             4 minutes ago       Running             kube-controller-manager   0                   0af4dbedd8fd4
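A listing like the one above normally comes from the CRI-O CLI on the node itself; a minimal sketch of how it could be reproduced against this profile (the profile name addons-503456 is from this report, the crictl invocation is an assumption about the node's tooling):

  # list all containers on the minikube node, including exited ones
  minikube -p addons-503456 ssh "sudo crictl ps -a"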
	
	* 
	* ==> coredns [12227c776cd2cada611a2e3413a349129501b587e50fe78b8af32379e5d6251d] <==
	* [INFO] 10.244.0.7:42960 - 56720 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000092631s
	[INFO] 10.244.0.7:57296 - 55585 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077109s
	[INFO] 10.244.0.7:57296 - 53027 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082318s
	[INFO] 10.244.0.7:38459 - 26610 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054164s
	[INFO] 10.244.0.7:38459 - 17648 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008507s
	[INFO] 10.244.0.7:60706 - 46240 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077091s
	[INFO] 10.244.0.7:60706 - 9634 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000064003s
	[INFO] 10.244.0.7:36873 - 4175 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000085795s
	[INFO] 10.244.0.7:36873 - 16204 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075675s
	[INFO] 10.244.0.7:44283 - 60311 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064027s
	[INFO] 10.244.0.7:44283 - 2281 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088659s
	[INFO] 10.244.0.7:52066 - 4051 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067717s
	[INFO] 10.244.0.7:52066 - 55277 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069735s
	[INFO] 10.244.0.7:40610 - 39583 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000076667s
	[INFO] 10.244.0.7:40610 - 30109 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000275229s
	[INFO] 10.244.0.18:47144 - 6824 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000319606s
	[INFO] 10.244.0.18:36018 - 1624 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001283948s
	[INFO] 10.244.0.18:47513 - 21636 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000401961s
	[INFO] 10.244.0.18:44275 - 47729 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001286186s
	[INFO] 10.244.0.18:58959 - 19332 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000157518s
	[INFO] 10.244.0.18:56952 - 17248 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00006026s
	[INFO] 10.244.0.18:53269 - 8808 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.001495576s
	[INFO] 10.244.0.18:40285 - 42531 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 192 0.002089312s
	[INFO] 10.244.0.20:42772 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000497143s
	[INFO] 10.244.0.20:32974 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000144842s
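The NXDOMAIN/NOERROR pairs above are the usual cluster search-path expansion for in-cluster lookups; a minimal sketch of reproducing such a query from inside the cluster (the dns-check pod name and busybox image are hypothetical, not from this report):

  # run a throwaway pod and resolve the registry service the same way the logs show
  kubectl --context addons-503456 run dns-check --rm -it --restart=Never \
    --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local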
	
	* 
	* ==> describe nodes <==
	* Name:               addons-503456
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-503456
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=addons-503456
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T23_39_33_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-503456
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 23:39:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-503456
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 23:44:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 23:44:09 +0000   Wed, 06 Sep 2023 23:39:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 23:44:09 +0000   Wed, 06 Sep 2023 23:39:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 23:44:09 +0000   Wed, 06 Sep 2023 23:39:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 23:44:09 +0000   Wed, 06 Sep 2023 23:39:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    addons-503456
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 88f1f12841904f1e98a0920d6c87551c
	  System UUID:                88f1f128-4190-4f1e-98a0-920d6c87551c
	  Boot ID:                    71e40ef0-bc37-463e-a6bb-051c343be4af
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-5rd75         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-d4c87556c-ksld8                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  headlamp                    headlamp-699c48fb74-8bfx2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 coredns-5dd5756b68-bxt4r                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m26s
	  kube-system                 etcd-addons-503456                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m38s
	  kube-system                 kube-apiserver-addons-503456             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-controller-manager-addons-503456    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-proxy-llcm4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-addons-503456             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m47s (x8 over 4m47s)  kubelet          Node addons-503456 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s (x8 over 4m47s)  kubelet          Node addons-503456 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s (x7 over 4m47s)  kubelet          Node addons-503456 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m38s                  kubelet          Node addons-503456 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s                  kubelet          Node addons-503456 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s                  kubelet          Node addons-503456 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m38s                  kubelet          Node addons-503456 status is now: NodeReady
	  Normal  RegisteredNode           4m26s                  node-controller  Node addons-503456 event: Registered Node addons-503456 in Controller
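The node description above corresponds to what kubectl prints for the single control-plane node of this profile; a minimal sketch for regenerating it (context and node name are from this report):

  kubectl --context addons-503456 describe node addons-503456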
	
	* 
	* ==> dmesg <==
	* [  +4.414188] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.336161] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.143266] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Sep 6 23:39] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.445200] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.109868] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.140663] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.111796] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.190085] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[  +9.056758] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +9.784942] systemd-fstab-generator[1250]: Ignoring "noauto" for root device
	[ +20.272526] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.625870] kauditd_printk_skb: 38 callbacks suppressed
	[Sep 6 23:40] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.887652] kauditd_printk_skb: 14 callbacks suppressed
	[Sep 6 23:41] kauditd_printk_skb: 7 callbacks suppressed
	[ +14.242022] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.056591] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.006895] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.780631] kauditd_printk_skb: 26 callbacks suppressed
	[Sep 6 23:42] kauditd_printk_skb: 12 callbacks suppressed
	[Sep 6 23:44] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [722f497830d7292f6fd0e488d4a9fa3b47a61448f04fdee7aa3bc6c525827768] <==
	* {"level":"info","ts":"2023-09-06T23:40:45.052426Z","caller":"traceutil/trace.go:171","msg":"trace[1312448209] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:959; }","duration":"119.931228ms","start":"2023-09-06T23:40:44.932483Z","end":"2023-09-06T23:40:45.052414Z","steps":["trace[1312448209] 'range keys from in-memory index tree'  (duration: 119.680716ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T23:40:45.052726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.952108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13448"}
	{"level":"info","ts":"2023-09-06T23:40:45.052804Z","caller":"traceutil/trace.go:171","msg":"trace[998702654] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:959; }","duration":"109.032862ms","start":"2023-09-06T23:40:44.943762Z","end":"2023-09-06T23:40:45.052795Z","steps":["trace[998702654] 'range keys from in-memory index tree'  (duration: 108.875067ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:40:54.656894Z","caller":"traceutil/trace.go:171","msg":"trace[1322429296] linearizableReadLoop","detail":"{readStateIndex:1032; appliedIndex:1031; }","duration":"211.047311ms","start":"2023-09-06T23:40:54.445821Z","end":"2023-09-06T23:40:54.656868Z","steps":["trace[1322429296] 'read index received'  (duration: 210.845038ms)","trace[1322429296] 'applied index is now lower than readState.Index'  (duration: 201.763µs)"],"step_count":2}
	{"level":"info","ts":"2023-09-06T23:40:54.657166Z","caller":"traceutil/trace.go:171","msg":"trace[1376030123] transaction","detail":"{read_only:false; response_revision:998; number_of_response:1; }","duration":"248.610599ms","start":"2023-09-06T23:40:54.408547Z","end":"2023-09-06T23:40:54.657158Z","steps":["trace[1376030123] 'process raft request'  (duration: 248.162434ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T23:40:54.65735Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.539402ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13946"}
	{"level":"info","ts":"2023-09-06T23:40:54.657455Z","caller":"traceutil/trace.go:171","msg":"trace[1880856548] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:998; }","duration":"211.654106ms","start":"2023-09-06T23:40:54.445792Z","end":"2023-09-06T23:40:54.657446Z","steps":["trace[1880856548] 'agreement among raft nodes before linearized reading'  (duration: 211.505041ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T23:40:54.657755Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.809729ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78579"}
	{"level":"info","ts":"2023-09-06T23:40:54.657806Z","caller":"traceutil/trace.go:171","msg":"trace[1839796989] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:998; }","duration":"140.865671ms","start":"2023-09-06T23:40:54.516935Z","end":"2023-09-06T23:40:54.6578Z","steps":["trace[1839796989] 'agreement among raft nodes before linearized reading'  (duration: 140.726942ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T23:40:54.658484Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.174799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-5htks\" ","response":"range_response_count:1 size:4391"}
	{"level":"info","ts":"2023-09-06T23:40:54.658539Z","caller":"traceutil/trace.go:171","msg":"trace[592412957] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-5htks; range_end:; response_count:1; response_revision:998; }","duration":"100.23195ms","start":"2023-09-06T23:40:54.558299Z","end":"2023-09-06T23:40:54.658531Z","steps":["trace[592412957] 'agreement among raft nodes before linearized reading'  (duration: 100.160631ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:40:54.823786Z","caller":"traceutil/trace.go:171","msg":"trace[1091476705] transaction","detail":"{read_only:false; response_revision:999; number_of_response:1; }","duration":"162.4372ms","start":"2023-09-06T23:40:54.661328Z","end":"2023-09-06T23:40:54.823765Z","steps":["trace[1091476705] 'process raft request'  (duration: 149.54322ms)","trace[1091476705] 'compare'  (duration: 12.428953ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-06T23:40:54.823795Z","caller":"traceutil/trace.go:171","msg":"trace[1190615270] transaction","detail":"{read_only:false; response_revision:1000; number_of_response:1; }","duration":"154.845541ms","start":"2023-09-06T23:40:54.668935Z","end":"2023-09-06T23:40:54.823781Z","steps":["trace[1190615270] 'process raft request'  (duration: 154.785896ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:40:54.824107Z","caller":"traceutil/trace.go:171","msg":"trace[125737956] linearizableReadLoop","detail":"{readStateIndex:1033; appliedIndex:1032; }","duration":"160.373082ms","start":"2023-09-06T23:40:54.663726Z","end":"2023-09-06T23:40:54.824099Z","steps":["trace[125737956] 'read index received'  (duration: 147.15243ms)","trace[125737956] 'applied index is now lower than readState.Index'  (duration: 13.219568ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-06T23:40:54.824216Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.490984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-06T23:40:54.82623Z","caller":"traceutil/trace.go:171","msg":"trace[347604496] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1000; }","duration":"162.415186ms","start":"2023-09-06T23:40:54.663707Z","end":"2023-09-06T23:40:54.826122Z","steps":["trace[347604496] 'agreement among raft nodes before linearized reading'  (duration: 160.449366ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:41:01.416277Z","caller":"traceutil/trace.go:171","msg":"trace[1944371707] transaction","detail":"{read_only:false; response_revision:1051; number_of_response:1; }","duration":"277.599444ms","start":"2023-09-06T23:41:01.138661Z","end":"2023-09-06T23:41:01.41626Z","steps":["trace[1944371707] 'process raft request'  (duration: 277.308811ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:41:04.954959Z","caller":"traceutil/trace.go:171","msg":"trace[746205497] transaction","detail":"{read_only:false; response_revision:1056; number_of_response:1; }","duration":"280.185037ms","start":"2023-09-06T23:41:04.674758Z","end":"2023-09-06T23:41:04.954943Z","steps":["trace[746205497] 'process raft request'  (duration: 279.89364ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:41:25.286489Z","caller":"traceutil/trace.go:171","msg":"trace[1781457360] transaction","detail":"{read_only:false; response_revision:1202; number_of_response:1; }","duration":"280.350086ms","start":"2023-09-06T23:41:25.006113Z","end":"2023-09-06T23:41:25.286463Z","steps":["trace[1781457360] 'process raft request'  (duration: 279.82361ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T23:41:25.2882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.147054ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2023-09-06T23:41:25.288275Z","caller":"traceutil/trace.go:171","msg":"trace[292427400] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1202; }","duration":"264.275628ms","start":"2023-09-06T23:41:25.023989Z","end":"2023-09-06T23:41:25.288264Z","steps":["trace[292427400] 'agreement among raft nodes before linearized reading'  (duration: 264.089005ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:41:25.290077Z","caller":"traceutil/trace.go:171","msg":"trace[837510657] linearizableReadLoop","detail":"{readStateIndex:1244; appliedIndex:1243; }","duration":"262.031084ms","start":"2023-09-06T23:41:25.024041Z","end":"2023-09-06T23:41:25.286072Z","steps":["trace[837510657] 'read index received'  (duration: 261.85407ms)","trace[837510657] 'applied index is now lower than readState.Index'  (duration: 176.536µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-06T23:41:25.290891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.233734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2023-09-06T23:41:25.290965Z","caller":"traceutil/trace.go:171","msg":"trace[1546648090] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1202; }","duration":"260.316617ms","start":"2023-09-06T23:41:25.030637Z","end":"2023-09-06T23:41:25.290953Z","steps":["trace[1546648090] 'agreement among raft nodes before linearized reading'  (duration: 260.14614ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:42:10.083474Z","caller":"traceutil/trace.go:171","msg":"trace[1631625090] transaction","detail":"{read_only:false; response_revision:1395; number_of_response:1; }","duration":"110.470999ms","start":"2023-09-06T23:42:09.972979Z","end":"2023-09-06T23:42:10.08345Z","steps":["trace[1631625090] 'process raft request'  (duration: 110.245669ms)"],"step_count":1}
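The repeated "apply request took too long" warnings above flag reads and writes that exceeded etcd's 100ms expectation; on this 2-CPU / ~3.9GB VM that usually points to disk or CPU contention rather than data loss. A minimal sketch for pulling just those warnings from the running pod (pod name taken from this report):

  kubectl --context addons-503456 -n kube-system logs etcd-addons-503456 | grep "took too long"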
	
	* 
	* ==> gcp-auth [f503fb0371ce418096473cfec46411466fcf153cc27d0af0bf581dfacd0122f7] <==
	* 2023/09/06 23:41:14 GCP Auth Webhook started!
	2023/09/06 23:41:19 Ready to marshal response ...
	2023/09/06 23:41:19 Ready to write response ...
	2023/09/06 23:41:19 Ready to marshal response ...
	2023/09/06 23:41:19 Ready to write response ...
	2023/09/06 23:41:19 Ready to marshal response ...
	2023/09/06 23:41:19 Ready to write response ...
	2023/09/06 23:41:27 Ready to marshal response ...
	2023/09/06 23:41:27 Ready to write response ...
	2023/09/06 23:41:28 Ready to marshal response ...
	2023/09/06 23:41:28 Ready to write response ...
	2023/09/06 23:41:36 Ready to marshal response ...
	2023/09/06 23:41:36 Ready to write response ...
	2023/09/06 23:42:10 Ready to marshal response ...
	2023/09/06 23:42:10 Ready to write response ...
	2023/09/06 23:42:37 Ready to marshal response ...
	2023/09/06 23:42:37 Ready to write response ...
	2023/09/06 23:44:00 Ready to marshal response ...
	2023/09/06 23:44:00 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:44:11 up 5 min,  0 users,  load average: 0.92, 1.50, 0.77
	Linux addons-503456 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ffcd2e6f2b213e96498a9a342c50705d1056748d4a2edd13481e673529b1c1eb] <==
	* E0906 23:42:33.095960       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0906 23:42:33.095968       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 23:42:54.271947       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 23:42:54.272017       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 23:42:54.282412       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 23:42:54.282512       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 23:42:54.299922       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 23:42:54.299989       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 23:42:54.319311       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 23:42:54.319382       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 23:42:54.332461       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 23:42:54.332538       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 23:42:54.356400       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 23:42:54.356508       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 23:42:54.368808       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 23:42:54.368914       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0906 23:42:54.399324       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0906 23:42:54.399405       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0906 23:42:54.402006       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0906 23:42:54.402082       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	W0906 23:42:55.320212       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0906 23:42:55.369298       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0906 23:42:55.391341       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0906 23:44:01.063106       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.180.99"}
	
	* 
	* ==> kube-controller-manager [7bc347f55dfa86e8f247a1aab30b2e3d2155c5876f81f5ed13b250754323996c] <==
	* I0906 23:43:16.482124       1 shared_informer.go:318] Caches are synced for garbage collector
	W0906 23:43:16.767947       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 23:43:16.768003       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0906 23:43:25.219842       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 23:43:25.219961       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0906 23:43:32.659833       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 23:43:32.660081       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0906 23:43:38.396397       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 23:43:38.396554       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0906 23:43:45.363291       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 23:43:45.363343       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0906 23:43:59.189854       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 23:43:59.189956       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0906 23:44:00.825730       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0906 23:44:00.881151       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-5rd75"
	I0906 23:44:00.890849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.619776ms"
	I0906 23:44:00.932347       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.323831ms"
	I0906 23:44:00.933402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="33.108µs"
	W0906 23:44:01.626220       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 23:44:01.626248       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0906 23:44:03.458054       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0906 23:44:03.467144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5dcd45b5bf" duration="7.619µs"
	I0906 23:44:03.467517       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0906 23:44:04.782268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="11.835491ms"
	I0906 23:44:04.782676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="165.055µs"
	
	* 
	* ==> kube-proxy [7b43deeb323b4af75eced39d3cab94f8fa2c694af92c3ee775eb00a91831e213] <==
	* I0906 23:39:57.617857       1 server_others.go:69] "Using iptables proxy"
	I0906 23:39:57.956962       1 node.go:141] Successfully retrieved node IP: 192.168.39.156
	I0906 23:39:58.403224       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0906 23:39:58.403270       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 23:39:58.420870       1 server_others.go:152] "Using iptables Proxier"
	I0906 23:39:58.437091       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0906 23:39:58.446084       1 server.go:846] "Version info" version="v1.28.1"
	I0906 23:39:58.449962       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 23:39:58.484272       1 config.go:188] "Starting service config controller"
	I0906 23:39:58.551085       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0906 23:39:58.484519       1 config.go:97] "Starting endpoint slice config controller"
	I0906 23:39:58.551449       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0906 23:39:58.509875       1 config.go:315] "Starting node config controller"
	I0906 23:39:58.551802       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0906 23:39:58.651764       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0906 23:39:58.652275       1 shared_informer.go:318] Caches are synced for node config
	I0906 23:39:58.652315       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [156a1f97803d7a9b68644f727afe5ab7e053e8734aa5ced1cef930bef6ab3813] <==
	* W0906 23:39:29.773316       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 23:39:29.773340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0906 23:39:29.773466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 23:39:29.773498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0906 23:39:30.584692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 23:39:30.584745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0906 23:39:30.654526       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 23:39:30.654650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0906 23:39:30.699106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 23:39:30.699160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0906 23:39:30.829225       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 23:39:30.829296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0906 23:39:30.892471       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 23:39:30.892518       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0906 23:39:30.970803       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 23:39:30.970885       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0906 23:39:31.015100       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 23:39:31.015242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 23:39:31.030023       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 23:39:31.030143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0906 23:39:31.088704       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 23:39:31.089115       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 23:39:31.090814       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 23:39:31.090905       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0906 23:39:33.364121       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-09-06 23:38:56 UTC, ends at Wed 2023-09-06 23:44:11 UTC. --
	Sep 06 23:44:00 addons-503456 kubelet[1257]: I0906 23:44:00.894819    1257 memory_manager.go:346] "RemoveStaleState removing state" podUID="10daf5d3-54e1-442a-9361-2fcdf67ab0d2" containerName="task-pv-container"
	Sep 06 23:44:00 addons-503456 kubelet[1257]: I0906 23:44:00.894825    1257 memory_manager.go:346] "RemoveStaleState removing state" podUID="4c8303db-6c3d-4d07-98a0-57123c31afe2" containerName="csi-attacher"
	Sep 06 23:44:00 addons-503456 kubelet[1257]: I0906 23:44:00.908358    1257 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9503445f-d043-4e1b-9cd0-0515175b9382-gcp-creds\") pod \"hello-world-app-5d77478584-5rd75\" (UID: \"9503445f-d043-4e1b-9cd0-0515175b9382\") " pod="default/hello-world-app-5d77478584-5rd75"
	Sep 06 23:44:00 addons-503456 kubelet[1257]: I0906 23:44:00.908501    1257 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glzxb\" (UniqueName: \"kubernetes.io/projected/9503445f-d043-4e1b-9cd0-0515175b9382-kube-api-access-glzxb\") pod \"hello-world-app-5d77478584-5rd75\" (UID: \"9503445f-d043-4e1b-9cd0-0515175b9382\") " pod="default/hello-world-app-5d77478584-5rd75"
	Sep 06 23:44:02 addons-503456 kubelet[1257]: I0906 23:44:02.220004    1257 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm94c\" (UniqueName: \"kubernetes.io/projected/ac53e12d-a2b6-40bb-84a9-eb3bad2ab0b6-kube-api-access-hm94c\") pod \"ac53e12d-a2b6-40bb-84a9-eb3bad2ab0b6\" (UID: \"ac53e12d-a2b6-40bb-84a9-eb3bad2ab0b6\") "
	Sep 06 23:44:02 addons-503456 kubelet[1257]: I0906 23:44:02.222540    1257 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac53e12d-a2b6-40bb-84a9-eb3bad2ab0b6-kube-api-access-hm94c" (OuterVolumeSpecName: "kube-api-access-hm94c") pod "ac53e12d-a2b6-40bb-84a9-eb3bad2ab0b6" (UID: "ac53e12d-a2b6-40bb-84a9-eb3bad2ab0b6"). InnerVolumeSpecName "kube-api-access-hm94c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 23:44:02 addons-503456 kubelet[1257]: I0906 23:44:02.320828    1257 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hm94c\" (UniqueName: \"kubernetes.io/projected/ac53e12d-a2b6-40bb-84a9-eb3bad2ab0b6-kube-api-access-hm94c\") on node \"addons-503456\" DevicePath \"\""
	Sep 06 23:44:02 addons-503456 kubelet[1257]: I0906 23:44:02.733472    1257 scope.go:117] "RemoveContainer" containerID="38666ca1e6181444941ada41611b462143cdbcef7ece32dab3696f7e21b1844b"
	Sep 06 23:44:02 addons-503456 kubelet[1257]: I0906 23:44:02.765259    1257 scope.go:117] "RemoveContainer" containerID="38666ca1e6181444941ada41611b462143cdbcef7ece32dab3696f7e21b1844b"
	Sep 06 23:44:02 addons-503456 kubelet[1257]: E0906 23:44:02.766366    1257 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38666ca1e6181444941ada41611b462143cdbcef7ece32dab3696f7e21b1844b\": container with ID starting with 38666ca1e6181444941ada41611b462143cdbcef7ece32dab3696f7e21b1844b not found: ID does not exist" containerID="38666ca1e6181444941ada41611b462143cdbcef7ece32dab3696f7e21b1844b"
	Sep 06 23:44:02 addons-503456 kubelet[1257]: I0906 23:44:02.766425    1257 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38666ca1e6181444941ada41611b462143cdbcef7ece32dab3696f7e21b1844b"} err="failed to get container status \"38666ca1e6181444941ada41611b462143cdbcef7ece32dab3696f7e21b1844b\": rpc error: code = NotFound desc = could not find container \"38666ca1e6181444941ada41611b462143cdbcef7ece32dab3696f7e21b1844b\": container with ID starting with 38666ca1e6181444941ada41611b462143cdbcef7ece32dab3696f7e21b1844b not found: ID does not exist"
	Sep 06 23:44:03 addons-503456 kubelet[1257]: I0906 23:44:03.257086    1257 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ac53e12d-a2b6-40bb-84a9-eb3bad2ab0b6" path="/var/lib/kubelet/pods/ac53e12d-a2b6-40bb-84a9-eb3bad2ab0b6/volumes"
	Sep 06 23:44:05 addons-503456 kubelet[1257]: I0906 23:44:05.256187    1257 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4e3fee77-9944-48ae-bb9b-82a216c2ab19" path="/var/lib/kubelet/pods/4e3fee77-9944-48ae-bb9b-82a216c2ab19/volumes"
	Sep 06 23:44:05 addons-503456 kubelet[1257]: I0906 23:44:05.256755    1257 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dd27e72d-ab29-4da2-8121-e5eb76cfd15d" path="/var/lib/kubelet/pods/dd27e72d-ab29-4da2-8121-e5eb76cfd15d/volumes"
	Sep 06 23:44:06 addons-503456 kubelet[1257]: I0906 23:44:06.764901    1257 scope.go:117] "RemoveContainer" containerID="8a9c0dcb6f59fd7c99e073609353edbaca07b30223525e8b7c4e5b0df5187799"
	Sep 06 23:44:06 addons-503456 kubelet[1257]: I0906 23:44:06.792411    1257 scope.go:117] "RemoveContainer" containerID="8a9c0dcb6f59fd7c99e073609353edbaca07b30223525e8b7c4e5b0df5187799"
	Sep 06 23:44:06 addons-503456 kubelet[1257]: E0906 23:44:06.793112    1257 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a9c0dcb6f59fd7c99e073609353edbaca07b30223525e8b7c4e5b0df5187799\": container with ID starting with 8a9c0dcb6f59fd7c99e073609353edbaca07b30223525e8b7c4e5b0df5187799 not found: ID does not exist" containerID="8a9c0dcb6f59fd7c99e073609353edbaca07b30223525e8b7c4e5b0df5187799"
	Sep 06 23:44:06 addons-503456 kubelet[1257]: I0906 23:44:06.793172    1257 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a9c0dcb6f59fd7c99e073609353edbaca07b30223525e8b7c4e5b0df5187799"} err="failed to get container status \"8a9c0dcb6f59fd7c99e073609353edbaca07b30223525e8b7c4e5b0df5187799\": rpc error: code = NotFound desc = could not find container \"8a9c0dcb6f59fd7c99e073609353edbaca07b30223525e8b7c4e5b0df5187799\": container with ID starting with 8a9c0dcb6f59fd7c99e073609353edbaca07b30223525e8b7c4e5b0df5187799 not found: ID does not exist"
	Sep 06 23:44:06 addons-503456 kubelet[1257]: I0906 23:44:06.858178    1257 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47e98e7a-3a62-4a4b-93ab-d86aae8c3287-webhook-cert\") pod \"47e98e7a-3a62-4a4b-93ab-d86aae8c3287\" (UID: \"47e98e7a-3a62-4a4b-93ab-d86aae8c3287\") "
	Sep 06 23:44:06 addons-503456 kubelet[1257]: I0906 23:44:06.858327    1257 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h25z9\" (UniqueName: \"kubernetes.io/projected/47e98e7a-3a62-4a4b-93ab-d86aae8c3287-kube-api-access-h25z9\") pod \"47e98e7a-3a62-4a4b-93ab-d86aae8c3287\" (UID: \"47e98e7a-3a62-4a4b-93ab-d86aae8c3287\") "
	Sep 06 23:44:06 addons-503456 kubelet[1257]: I0906 23:44:06.863323    1257 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e98e7a-3a62-4a4b-93ab-d86aae8c3287-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "47e98e7a-3a62-4a4b-93ab-d86aae8c3287" (UID: "47e98e7a-3a62-4a4b-93ab-d86aae8c3287"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 23:44:06 addons-503456 kubelet[1257]: I0906 23:44:06.863868    1257 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47e98e7a-3a62-4a4b-93ab-d86aae8c3287-kube-api-access-h25z9" (OuterVolumeSpecName: "kube-api-access-h25z9") pod "47e98e7a-3a62-4a4b-93ab-d86aae8c3287" (UID: "47e98e7a-3a62-4a4b-93ab-d86aae8c3287"). InnerVolumeSpecName "kube-api-access-h25z9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 23:44:06 addons-503456 kubelet[1257]: I0906 23:44:06.959373    1257 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h25z9\" (UniqueName: \"kubernetes.io/projected/47e98e7a-3a62-4a4b-93ab-d86aae8c3287-kube-api-access-h25z9\") on node \"addons-503456\" DevicePath \"\""
	Sep 06 23:44:06 addons-503456 kubelet[1257]: I0906 23:44:06.959410    1257 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47e98e7a-3a62-4a4b-93ab-d86aae8c3287-webhook-cert\") on node \"addons-503456\" DevicePath \"\""
	Sep 06 23:44:07 addons-503456 kubelet[1257]: I0906 23:44:07.257213    1257 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="47e98e7a-3a62-4a4b-93ab-d86aae8c3287" path="/var/lib/kubelet/pods/47e98e7a-3a62-4a4b-93ab-d86aae8c3287/volumes"
	
	* 
	* ==> storage-provisioner [16d54fe665a58a9d5f204a6f24cae10b39e1cdc8195765e5601aa3baabd24e16] <==
	* I0906 23:40:03.392997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 23:40:03.637671       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 23:40:03.640938       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 23:40:03.688493       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 23:40:03.703924       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-503456_db47bbcf-9f0e-457a-94e6-29f85d196e04!
	I0906 23:40:03.688790       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5be2fa3b-e3e3-4f2d-b8e9-76306bd6601e", APIVersion:"v1", ResourceVersion:"810", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-503456_db47bbcf-9f0e-457a-94e6-29f85d196e04 became leader
	I0906 23:40:03.914707       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-503456_db47bbcf-9f0e-457a-94e6-29f85d196e04!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-503456 -n addons-503456
helpers_test.go:261: (dbg) Run:  kubectl --context addons-503456 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (157.23s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (155.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-503456
addons_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-503456: exit status 82 (2m1.225853595s)

                                                
                                                
-- stdout --
	* Stopping node "addons-503456"  ...
	* Stopping node "addons-503456"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:150: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-503456" : exit status 82
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-503456
addons_test.go:152: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-503456: exit status 11 (21.704071473s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.156:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:154: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-503456" : exit status 11
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-503456
addons_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-503456: exit status 11 (6.144417032s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.156:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:158: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-503456" : exit status 11
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-503456
addons_test.go:161: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-503456: exit status 11 (6.143117906s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.156:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:163: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-503456" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.22s)
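For reference, the sequence that failed above is four CLI invocations against the same profile. A minimal Go sketch that replays them is below; the binary path and profile name are copied from the log, while everything else (running it outside the CI harness, ignoring exit codes) is an assumption, not part of the test itself.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-amd64" // binary path as invoked in the log above (assumed to be on disk)
	profile := "addons-503456"        // profile name from this run

	// Same order as the failing test: stop, enable dashboard, disable dashboard, disable gvisor.
	cmds := [][]string{
		{"stop", "-p", profile},
		{"addons", "enable", "dashboard", "-p", profile},
		{"addons", "disable", "dashboard", "-p", profile},
		{"addons", "disable", "gvisor", "-p", profile},
	}
	for _, args := range cmds {
		out, err := exec.Command(bin, args...).CombinedOutput()
		fmt.Printf("%s %v -> err=%v\n%s\n", bin, args, err, out)
	}
}

In this run the stop step exited with GUEST_STOP_TIMEOUT and each subsequent addon command failed to reach 192.168.39.156:22 ("no route to host"), as recorded in the stderr blocks above.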

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (170.98s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-474162 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-474162 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.506001564s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-474162 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-474162 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cae909fa-18da-4683-b981-bfee5420863a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cae909fa-18da-4683-b981-bfee5420863a] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 14.016417497s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-474162 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0906 23:56:17.594049   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0906 23:56:24.846928   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0906 23:56:24.852214   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0906 23:56:24.862459   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0906 23:56:24.882729   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0906 23:56:24.923035   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0906 23:56:25.003382   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0906 23:56:25.163805   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0906 23:56:25.484436   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0906 23:56:26.125360   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0906 23:56:27.405840   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0906 23:56:29.967753   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-474162 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.547282609s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-474162 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-474162 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.53
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-474162 addons disable ingress-dns --alsologtostderr -v=1
E0906 23:56:35.088914   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-474162 addons disable ingress-dns --alsologtostderr -v=1: (9.396049611s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-474162 addons disable ingress --alsologtostderr -v=1
E0906 23:56:45.277653   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0906 23:56:45.329851   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-474162 addons disable ingress --alsologtostderr -v=1: (7.540492443s)
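The check that timed out above boils down to a plain HTTP request whose Host header is forced to nginx.example.com so the ingress controller routes it to the nginx test service; the test issues it as `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` over SSH from inside the node. A hedged Go equivalent follows; the 127.0.0.1 target only makes sense when run on the node itself, so treat the address as an assumption and substitute the ingress IP when replaying from outside.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Overriding req.Host is the Go equivalent of curl's -H 'Host: ...':
	// it targets the name-based ingress rule without DNS for nginx.example.com.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}

A 200 response with the nginx welcome page would indicate the ingress path works; in this run the request never completed and the SSH session exited with status 28, per the stderr block above.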
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-474162 -n ingress-addon-legacy-474162
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-474162 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-474162 logs -n 25: (1.058725857s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-000295                                                   | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup4262293141/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-000295                                                   | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup4262293141/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-000295 ssh findmnt                                          | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-000295 ssh findmnt                                          | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC | 06 Sep 23 23:51 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-000295 ssh findmnt                                          | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC | 06 Sep 23 23:51 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-000295 ssh findmnt                                          | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC | 06 Sep 23 23:51 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-000295                                                   | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-000295                                                      | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC | 06 Sep 23 23:51 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-000295                                                      | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC | 06 Sep 23 23:51 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-000295                                                      | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC | 06 Sep 23 23:51 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-000295                                                      | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC | 06 Sep 23 23:51 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-000295                                                      | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC | 06 Sep 23 23:51 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-000295 ssh pgrep                                            | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-000295 image build -t                                       | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:51 UTC | 06 Sep 23 23:52 UTC |
	|                | localhost/my-image:functional-000295                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-000295 image ls                                             | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:52 UTC | 06 Sep 23 23:52 UTC |
	| image          | functional-000295                                                      | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:52 UTC | 06 Sep 23 23:52 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-000295                                                      | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:52 UTC | 06 Sep 23 23:52 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| delete         | -p functional-000295                                                   | functional-000295           | jenkins | v1.31.2 | 06 Sep 23 23:52 UTC | 06 Sep 23 23:52 UTC |
	| start          | -p ingress-addon-legacy-474162                                         | ingress-addon-legacy-474162 | jenkins | v1.31.2 | 06 Sep 23 23:52 UTC | 06 Sep 23 23:53 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                     |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-474162                                            | ingress-addon-legacy-474162 | jenkins | v1.31.2 | 06 Sep 23 23:53 UTC | 06 Sep 23 23:54 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-474162                                            | ingress-addon-legacy-474162 | jenkins | v1.31.2 | 06 Sep 23 23:54 UTC | 06 Sep 23 23:54 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-474162                                            | ingress-addon-legacy-474162 | jenkins | v1.31.2 | 06 Sep 23 23:54 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-474162 ip                                         | ingress-addon-legacy-474162 | jenkins | v1.31.2 | 06 Sep 23 23:56 UTC | 06 Sep 23 23:56 UTC |
	| addons         | ingress-addon-legacy-474162                                            | ingress-addon-legacy-474162 | jenkins | v1.31.2 | 06 Sep 23 23:56 UTC | 06 Sep 23 23:56 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-474162                                            | ingress-addon-legacy-474162 | jenkins | v1.31.2 | 06 Sep 23 23:56 UTC | 06 Sep 23 23:56 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 23:52:19
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 23:52:19.854975   22116 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:52:19.855105   22116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:52:19.855116   22116 out.go:309] Setting ErrFile to fd 2...
	I0906 23:52:19.855123   22116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:52:19.855418   22116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0906 23:52:19.856200   22116 out.go:303] Setting JSON to false
	I0906 23:52:19.857338   22116 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2084,"bootTime":1694042256,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:52:19.857417   22116 start.go:138] virtualization: kvm guest
	I0906 23:52:19.859803   22116 out.go:177] * [ingress-addon-legacy-474162] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0906 23:52:19.861412   22116 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 23:52:19.861422   22116 notify.go:220] Checking for updates...
	I0906 23:52:19.862915   22116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:52:19.864287   22116 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0906 23:52:19.865626   22116 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0906 23:52:19.866959   22116 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 23:52:19.868315   22116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 23:52:19.871109   22116 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 23:52:19.904865   22116 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 23:52:19.906126   22116 start.go:298] selected driver: kvm2
	I0906 23:52:19.906135   22116 start.go:902] validating driver "kvm2" against <nil>
	I0906 23:52:19.906145   22116 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 23:52:19.906849   22116 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:52:19.906925   22116 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 23:52:19.920635   22116 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0906 23:52:19.920693   22116 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 23:52:19.920900   22116 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 23:52:19.920940   22116 cni.go:84] Creating CNI manager for ""
	I0906 23:52:19.920955   22116 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 23:52:19.920970   22116 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 23:52:19.920983   22116 start_flags.go:321] config:
	{Name:ingress-addon-legacy-474162 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-474162 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:52:19.921139   22116 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:52:19.923053   22116 out.go:177] * Starting control plane node ingress-addon-legacy-474162 in cluster ingress-addon-legacy-474162
	I0906 23:52:19.924324   22116 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0906 23:52:20.429862   22116 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0906 23:52:20.429891   22116 cache.go:57] Caching tarball of preloaded images
	I0906 23:52:20.430058   22116 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0906 23:52:20.431831   22116 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0906 23:52:20.433081   22116 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0906 23:52:20.569248   22116 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0906 23:52:34.276766   22116 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0906 23:52:34.276858   22116 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0906 23:52:35.210936   22116 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
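For context on the checksum step above: the preload tarball is fetched with a "checksum=md5:..." query parameter and re-hashed locally before it is trusted. A minimal, self-contained Go sketch of that verification step (verifyMD5 is a hypothetical helper, not minikube's preload.go):

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    )

    // verifyMD5 re-reads a downloaded tarball and compares its md5 digest
    // against the expected checksum (the value after "checksum=md5:" in the
    // download URL). Hypothetical helper; minikube's own preload code differs.
    func verifyMD5(path, expected string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	h := md5.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return err
    	}
    	got := hex.EncodeToString(h.Sum(nil))
    	if got != expected {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
    	}
    	return nil
    }

    func main() {
    	// The checksum below is the one seen in the download URL above.
    	if err := verifyMD5("preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4",
    		"0d02e096853189c5b37812b400898e14"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("preload tarball verified")
    }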
	I0906 23:52:35.211236   22116 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/config.json ...
	I0906 23:52:35.211262   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/config.json: {Name:mk68352c4599001a8bc1dd0629d69cfc3a26a71e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:52:35.211421   22116 start.go:365] acquiring machines lock for ingress-addon-legacy-474162: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 23:52:35.211461   22116 start.go:369] acquired machines lock for "ingress-addon-legacy-474162" in 18.5µs
	I0906 23:52:35.211478   22116 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-474162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Ku
bernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-474162 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 23:52:35.211560   22116 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 23:52:35.214598   22116 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0906 23:52:35.214796   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:52:35.214853   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:52:35.229616   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I0906 23:52:35.230023   22116 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:52:35.230661   22116 main.go:141] libmachine: Using API Version  1
	I0906 23:52:35.230683   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:52:35.231042   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:52:35.231263   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetMachineName
	I0906 23:52:35.231428   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .DriverName
	I0906 23:52:35.231575   22116 start.go:159] libmachine.API.Create for "ingress-addon-legacy-474162" (driver="kvm2")
	I0906 23:52:35.231599   22116 client.go:168] LocalClient.Create starting
	I0906 23:52:35.231641   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem
	I0906 23:52:35.231680   22116 main.go:141] libmachine: Decoding PEM data...
	I0906 23:52:35.231702   22116 main.go:141] libmachine: Parsing certificate...
	I0906 23:52:35.231773   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem
	I0906 23:52:35.231801   22116 main.go:141] libmachine: Decoding PEM data...
	I0906 23:52:35.231819   22116 main.go:141] libmachine: Parsing certificate...
	I0906 23:52:35.231845   22116 main.go:141] libmachine: Running pre-create checks...
	I0906 23:52:35.231860   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .PreCreateCheck
	I0906 23:52:35.232219   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetConfigRaw
	I0906 23:52:35.232584   22116 main.go:141] libmachine: Creating machine...
	I0906 23:52:35.232598   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .Create
	I0906 23:52:35.232718   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Creating KVM machine...
	I0906 23:52:35.233776   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found existing default KVM network
	I0906 23:52:35.234455   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:35.234328   22172 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d780}
	I0906 23:52:35.240018   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | trying to create private KVM network mk-ingress-addon-legacy-474162 192.168.39.0/24...
	I0906 23:52:35.306047   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | private KVM network mk-ingress-addon-legacy-474162 192.168.39.0/24 created
	I0906 23:52:35.306074   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Setting up store path in /home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162 ...
	I0906 23:52:35.306090   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:35.306014   22172 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0906 23:52:35.306109   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Building disk image from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0906 23:52:35.306164   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Downloading /home/jenkins/minikube-integration/17174-6470/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0906 23:52:35.502914   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:35.502801   22172 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/id_rsa...
	I0906 23:52:35.791654   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:35.791526   22172 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/ingress-addon-legacy-474162.rawdisk...
	I0906 23:52:35.791691   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Writing magic tar header
	I0906 23:52:35.791708   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Writing SSH key tar header
	I0906 23:52:35.791724   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:35.791632   22172 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162 ...
	I0906 23:52:35.791738   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162
	I0906 23:52:35.791752   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162 (perms=drwx------)
	I0906 23:52:35.791764   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines
	I0906 23:52:35.791792   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0906 23:52:35.791808   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470
	I0906 23:52:35.791823   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines (perms=drwxr-xr-x)
	I0906 23:52:35.791839   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 23:52:35.791854   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube (perms=drwxr-xr-x)
	I0906 23:52:35.791873   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470 (perms=drwxrwxr-x)
	I0906 23:52:35.791889   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 23:52:35.791904   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Checking permissions on dir: /home/jenkins
	I0906 23:52:35.791934   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 23:52:35.791953   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Creating domain...
	I0906 23:52:35.791961   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Checking permissions on dir: /home
	I0906 23:52:35.791974   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Skipping /home - not owner
	I0906 23:52:35.793054   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) define libvirt domain using xml: 
	I0906 23:52:35.793081   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) <domain type='kvm'>
	I0906 23:52:35.793094   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)   <name>ingress-addon-legacy-474162</name>
	I0906 23:52:35.793120   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)   <memory unit='MiB'>4096</memory>
	I0906 23:52:35.793146   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)   <vcpu>2</vcpu>
	I0906 23:52:35.793167   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)   <features>
	I0906 23:52:35.793182   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <acpi/>
	I0906 23:52:35.793194   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <apic/>
	I0906 23:52:35.793220   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <pae/>
	I0906 23:52:35.793241   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     
	I0906 23:52:35.793248   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)   </features>
	I0906 23:52:35.793256   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)   <cpu mode='host-passthrough'>
	I0906 23:52:35.793264   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)   
	I0906 23:52:35.793273   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)   </cpu>
	I0906 23:52:35.793280   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)   <os>
	I0906 23:52:35.793291   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <type>hvm</type>
	I0906 23:52:35.793297   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <boot dev='cdrom'/>
	I0906 23:52:35.793305   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <boot dev='hd'/>
	I0906 23:52:35.793323   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <bootmenu enable='no'/>
	I0906 23:52:35.793343   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)   </os>
	I0906 23:52:35.793356   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)   <devices>
	I0906 23:52:35.793369   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <disk type='file' device='cdrom'>
	I0906 23:52:35.793390   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/boot2docker.iso'/>
	I0906 23:52:35.793407   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <target dev='hdc' bus='scsi'/>
	I0906 23:52:35.793418   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <readonly/>
	I0906 23:52:35.793434   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     </disk>
	I0906 23:52:35.793448   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <disk type='file' device='disk'>
	I0906 23:52:35.793464   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 23:52:35.793484   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/ingress-addon-legacy-474162.rawdisk'/>
	I0906 23:52:35.793497   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <target dev='hda' bus='virtio'/>
	I0906 23:52:35.793511   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     </disk>
	I0906 23:52:35.793520   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <interface type='network'>
	I0906 23:52:35.793541   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <source network='mk-ingress-addon-legacy-474162'/>
	I0906 23:52:35.793553   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <model type='virtio'/>
	I0906 23:52:35.793566   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     </interface>
	I0906 23:52:35.793582   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <interface type='network'>
	I0906 23:52:35.793597   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <source network='default'/>
	I0906 23:52:35.793609   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <model type='virtio'/>
	I0906 23:52:35.793623   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     </interface>
	I0906 23:52:35.793635   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <serial type='pty'>
	I0906 23:52:35.793662   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <target port='0'/>
	I0906 23:52:35.793677   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     </serial>
	I0906 23:52:35.793692   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <console type='pty'>
	I0906 23:52:35.793707   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <target type='serial' port='0'/>
	I0906 23:52:35.793722   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     </console>
	I0906 23:52:35.793735   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     <rng model='virtio'>
	I0906 23:52:35.793751   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)       <backend model='random'>/dev/random</backend>
	I0906 23:52:35.793767   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     </rng>
	I0906 23:52:35.793781   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     
	I0906 23:52:35.793794   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)     
	I0906 23:52:35.793815   22116 main.go:141] libmachine: (ingress-addon-legacy-474162)   </devices>
	I0906 23:52:35.793827   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) </domain>
	I0906 23:52:35.793855   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) 
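The domain XML logged above is what gets handed to libvirt to define and boot the VM. A rough sketch of that step using the Go libvirt bindings (assuming the libvirt.org/go/libvirt package; the kvm2 driver's real code path wraps this differently):

    package main

    import (
    	"fmt"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart defines a persistent domain from XML like the document
    // shown in the log and then boots it. Sketch only; error handling is minimal.
    func defineAndStart(domainXML string) error {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		return fmt.Errorf("connect to libvirt: %w", err)
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		return fmt.Errorf("define domain: %w", err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // "Creating domain..." in the log
    		return fmt.Errorf("start domain: %w", err)
    	}
    	return nil
    }

    func main() {
    	xml, err := os.ReadFile("domain.xml") // the <domain type='kvm'> document above
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := defineAndStart(string(xml)); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }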
	I0906 23:52:35.798011   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:12:53:dc in network default
	I0906 23:52:35.798598   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Ensuring networks are active...
	I0906 23:52:35.798613   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:35.799209   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Ensuring network default is active
	I0906 23:52:35.799483   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Ensuring network mk-ingress-addon-legacy-474162 is active
	I0906 23:52:35.799952   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Getting domain xml...
	I0906 23:52:35.800543   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Creating domain...
	I0906 23:52:37.024550   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Waiting to get IP...
	I0906 23:52:37.025362   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:37.025801   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:37.025840   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:37.025786   22172 retry.go:31] will retry after 235.176233ms: waiting for machine to come up
	I0906 23:52:37.262219   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:37.262705   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:37.262733   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:37.262665   22172 retry.go:31] will retry after 317.369106ms: waiting for machine to come up
	I0906 23:52:37.581014   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:37.581424   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:37.581467   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:37.581379   22172 retry.go:31] will retry after 334.828631ms: waiting for machine to come up
	I0906 23:52:37.917817   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:37.918285   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:37.918310   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:37.918232   22172 retry.go:31] will retry after 381.711835ms: waiting for machine to come up
	I0906 23:52:38.301848   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:38.302291   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:38.302312   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:38.302252   22172 retry.go:31] will retry after 726.564707ms: waiting for machine to come up
	I0906 23:52:39.030047   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:39.030453   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:39.030479   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:39.030401   22172 retry.go:31] will retry after 699.350086ms: waiting for machine to come up
	I0906 23:52:39.731248   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:39.731636   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:39.731669   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:39.731584   22172 retry.go:31] will retry after 909.525967ms: waiting for machine to come up
	I0906 23:52:40.642282   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:40.642654   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:40.642682   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:40.642617   22172 retry.go:31] will retry after 929.228491ms: waiting for machine to come up
	I0906 23:52:41.573621   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:41.574071   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:41.574100   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:41.573991   22172 retry.go:31] will retry after 1.403381264s: waiting for machine to come up
	I0906 23:52:42.978448   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:42.978844   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:42.978873   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:42.978805   22172 retry.go:31] will retry after 2.079021255s: waiting for machine to come up
	I0906 23:52:45.060120   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:45.060514   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:45.060544   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:45.060475   22172 retry.go:31] will retry after 1.925362655s: waiting for machine to come up
	I0906 23:52:46.987228   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:46.987771   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:46.987797   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:46.987722   22172 retry.go:31] will retry after 2.934771338s: waiting for machine to come up
	I0906 23:52:49.925411   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:49.925844   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:49.925870   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:49.925803   22172 retry.go:31] will retry after 3.419698907s: waiting for machine to come up
	I0906 23:52:53.347895   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:53.348431   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find current IP address of domain ingress-addon-legacy-474162 in network mk-ingress-addon-legacy-474162
	I0906 23:52:53.348462   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | I0906 23:52:53.348387   22172 retry.go:31] will retry after 3.598853455s: waiting for machine to come up
	I0906 23:52:56.949550   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:56.950072   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Found IP for machine: 192.168.39.53
	I0906 23:52:56.950102   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has current primary IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:56.950114   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Reserving static IP address...
	I0906 23:52:56.950532   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-474162", mac: "52:54:00:82:17:76", ip: "192.168.39.53"} in network mk-ingress-addon-legacy-474162
	I0906 23:52:57.020620   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Reserved static IP address: 192.168.39.53
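The repeated "will retry after ..." lines above are a jittered, growing backoff while the driver waits for the new MAC address to pick up a DHCP lease. A generic Go sketch of that wait loop (waitForIP and its lookup callback are hypothetical stand-ins for the driver's lease query):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup() until it returns an address or the deadline
    // passes, sleeping a jittered, growing interval between attempts -- the
    // same shape as the "will retry after ..." lines in the log. lookup is a
    // stand-in for querying the libvirt network's DHCP leases by MAC address.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil && ip != "" {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay)))
    		time.Sleep(delay + jitter)
    		if delay < 4*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine to get an IP")
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 5 { // simulate the lease not existing yet
    			return "", errors.New("no lease for 52:54:00:82:17:76")
    		}
    		return "192.168.39.53", nil
    	}, 2*time.Minute)
    	fmt.Println(ip, err)
    }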
	I0906 23:52:57.020647   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Getting to WaitForSSH function...
	I0906 23:52:57.020658   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Waiting for SSH to be available...
	I0906 23:52:57.023446   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:52:57.023761   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162
	I0906 23:52:57.023786   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | unable to find defined IP address of network mk-ingress-addon-legacy-474162 interface with MAC address 52:54:00:82:17:76
	I0906 23:52:57.023943   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Using SSH client type: external
	I0906 23:52:57.023962   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/id_rsa (-rw-------)
	I0906 23:52:57.024046   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 23:52:57.024066   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | About to run SSH command:
	I0906 23:52:57.024081   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | exit 0
	I0906 23:52:57.027782   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | SSH cmd err, output: exit status 255: 
	I0906 23:52:57.027802   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0906 23:52:57.027811   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | command : exit 0
	I0906 23:52:57.027817   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | err     : exit status 255
	I0906 23:52:57.027913   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | output  : 
	I0906 23:53:00.029989   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Getting to WaitForSSH function...
	I0906 23:53:00.032490   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.032967   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:00.032999   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.033107   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Using SSH client type: external
	I0906 23:53:00.033140   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/id_rsa (-rw-------)
	I0906 23:53:00.033168   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 23:53:00.033187   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | About to run SSH command:
	I0906 23:53:00.033200   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | exit 0
	I0906 23:53:00.122539   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | SSH cmd err, output: <nil>: 
	I0906 23:53:00.122863   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) KVM machine creation complete!
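WaitForSSH above amounts to running `exit 0` over SSH until it succeeds; the first probe fails with status 255 because sshd inside the guest is not up yet. A sketch that shells out to the same external ssh client with the flags seen in the log (the key path and address are the ones from this run, used purely for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForSSH retries "ssh ... exit 0" until the command exits 0, which is
    // exactly the probe logged above. Requires a local ssh binary.
    func waitForSSH(addr, keyPath string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("ssh",
    			"-F", "/dev/null",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-i", keyPath,
    			"docker@"+addr,
    			"exit 0")
    		if err := cmd.Run(); err == nil {
    			return nil // sshd is answering
    		}
    		time.Sleep(3 * time.Second) // the log retries roughly every 3s
    	}
    	return fmt.Errorf("ssh to %s did not become available", addr)
    }

    func main() {
    	err := waitForSSH("192.168.39.53",
    		"/home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/id_rsa",
    		5*time.Minute)
    	fmt.Println(err)
    }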
	I0906 23:53:00.123139   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetConfigRaw
	I0906 23:53:00.123747   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .DriverName
	I0906 23:53:00.123943   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .DriverName
	I0906 23:53:00.124109   22116 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 23:53:00.124131   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetState
	I0906 23:53:00.125421   22116 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 23:53:00.125435   22116 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 23:53:00.125441   22116 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 23:53:00.125448   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:00.127701   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.128051   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:00.128087   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.128193   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHPort
	I0906 23:53:00.128404   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:00.128565   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:00.128719   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHUsername
	I0906 23:53:00.128880   22116 main.go:141] libmachine: Using SSH client type: native
	I0906 23:53:00.129283   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0906 23:53:00.129295   22116 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 23:53:00.241795   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 23:53:00.241814   22116 main.go:141] libmachine: Detecting the provisioner...
	I0906 23:53:00.241821   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:00.244442   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.244767   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:00.244801   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.244971   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHPort
	I0906 23:53:00.245147   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:00.245312   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:00.245496   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHUsername
	I0906 23:53:00.245685   22116 main.go:141] libmachine: Using SSH client type: native
	I0906 23:53:00.246123   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0906 23:53:00.246137   22116 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 23:53:00.363326   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0906 23:53:00.363390   22116 main.go:141] libmachine: found compatible host: buildroot
	I0906 23:53:00.363401   22116 main.go:141] libmachine: Provisioning with buildroot...
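Provisioner detection boils down to running `cat /etc/os-release` and reading the ID field, which is "buildroot" for the minikube ISO as shown above. A small Go sketch of that parsing (detectProvisioner is a hypothetical helper, not libmachine's detector):

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // detectProvisioner pulls the ID= field out of /etc/os-release content,
    // the same field used to pick the buildroot provisioner here.
    func detectProvisioner(osRelease string) string {
    	sc := bufio.NewScanner(strings.NewReader(osRelease))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "ID=") {
    			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
    		}
    	}
    	return ""
    }

    func main() {
    	out := `NAME=Buildroot
    VERSION=2021.02.12-1-g88b5c50-dirty
    ID=buildroot
    VERSION_ID=2021.02.12
    PRETTY_NAME="Buildroot 2021.02.12"`
    	fmt.Println(detectProvisioner(out)) // buildroot
    }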
	I0906 23:53:00.363416   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetMachineName
	I0906 23:53:00.363637   22116 buildroot.go:166] provisioning hostname "ingress-addon-legacy-474162"
	I0906 23:53:00.363662   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetMachineName
	I0906 23:53:00.363835   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:00.366381   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.366737   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:00.366799   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.366947   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHPort
	I0906 23:53:00.367131   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:00.367268   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:00.367357   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHUsername
	I0906 23:53:00.367512   22116 main.go:141] libmachine: Using SSH client type: native
	I0906 23:53:00.367891   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0906 23:53:00.367904   22116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-474162 && echo "ingress-addon-legacy-474162" | sudo tee /etc/hostname
	I0906 23:53:00.495195   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-474162
	
	I0906 23:53:00.495223   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:00.497858   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.498220   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:00.498252   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.498371   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHPort
	I0906 23:53:00.498571   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:00.498721   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:00.498884   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHUsername
	I0906 23:53:00.499022   22116 main.go:141] libmachine: Using SSH client type: native
	I0906 23:53:00.499499   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0906 23:53:00.499522   22116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-474162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-474162/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-474162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 23:53:00.623095   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 23:53:00.623122   22116 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0906 23:53:00.623145   22116 buildroot.go:174] setting up certificates
	I0906 23:53:00.623157   22116 provision.go:83] configureAuth start
	I0906 23:53:00.623170   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetMachineName
	I0906 23:53:00.623446   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetIP
	I0906 23:53:00.625988   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.626349   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:00.626377   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.626523   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:00.628445   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.628833   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:00.628861   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.629080   22116 provision.go:138] copyHostCerts
	I0906 23:53:00.629121   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0906 23:53:00.629160   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0906 23:53:00.629172   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0906 23:53:00.629256   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0906 23:53:00.629330   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0906 23:53:00.629346   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0906 23:53:00.629350   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0906 23:53:00.629375   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0906 23:53:00.629417   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0906 23:53:00.629431   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0906 23:53:00.629437   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0906 23:53:00.629456   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0906 23:53:00.629510   22116 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-474162 san=[192.168.39.53 192.168.39.53 localhost 127.0.0.1 minikube ingress-addon-legacy-474162]
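The server certificate mentioned above is issued from the local minikube CA with IP and DNS SANs covering the VM address, localhost, and the cluster name. A self-contained Go sketch of issuing such a cert with crypto/x509 (throwaway in-memory CA for illustration; minikube persists the real artifacts as server.pem / server-key.pem under .minikube/machines):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for .minikube/certs/ca.pem / ca-key.pem.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    		IsCA:                  true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	check(err)
    	caCert, err := x509.ParseCertificate(caDER)
    	check(err)

    	// Server cert with the SANs listed in the provision.go log line above.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-474162"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.39.53"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-474162"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	check(err)
    	fmt.Printf("issued %d-byte server cert signed by %s\n", len(der), caCert.Subject.CommonName)
    }

    func check(err error) {
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }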
	I0906 23:53:00.757140   22116 provision.go:172] copyRemoteCerts
	I0906 23:53:00.757187   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 23:53:00.757208   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:00.759762   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.760040   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:00.760075   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.760217   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHPort
	I0906 23:53:00.760373   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:00.760590   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHUsername
	I0906 23:53:00.760756   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/id_rsa Username:docker}
	I0906 23:53:00.848360   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 23:53:00.848438   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 23:53:00.871576   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 23:53:00.871652   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0906 23:53:00.894335   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 23:53:00.894402   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 23:53:00.917174   22116 provision.go:86] duration metric: configureAuth took 294.005133ms
	I0906 23:53:00.917200   22116 buildroot.go:189] setting minikube options for container-runtime
	I0906 23:53:00.917366   22116 config.go:182] Loaded profile config "ingress-addon-legacy-474162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0906 23:53:00.917449   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:00.919924   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.920352   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:00.920388   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:00.920598   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHPort
	I0906 23:53:00.920771   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:00.920926   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:00.921060   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHUsername
	I0906 23:53:00.921192   22116 main.go:141] libmachine: Using SSH client type: native
	I0906 23:53:00.921793   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0906 23:53:00.921813   22116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 23:53:01.220961   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 23:53:01.220997   22116 main.go:141] libmachine: Checking connection to Docker...
	I0906 23:53:01.221010   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetURL
	I0906 23:53:01.222178   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Using libvirt version 6000000
	I0906 23:53:01.224367   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.224732   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:01.224758   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.224894   22116 main.go:141] libmachine: Docker is up and running!
	I0906 23:53:01.224906   22116 main.go:141] libmachine: Reticulating splines...
	I0906 23:53:01.224920   22116 client.go:171] LocalClient.Create took 25.993303508s
	I0906 23:53:01.224939   22116 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-474162" took 25.993365535s
	I0906 23:53:01.224950   22116 start.go:300] post-start starting for "ingress-addon-legacy-474162" (driver="kvm2")
	I0906 23:53:01.224959   22116 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 23:53:01.224981   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .DriverName
	I0906 23:53:01.225236   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 23:53:01.225266   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:01.227254   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.227659   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:01.227698   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.227857   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHPort
	I0906 23:53:01.228081   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:01.228266   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHUsername
	I0906 23:53:01.228406   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/id_rsa Username:docker}
	I0906 23:53:01.316067   22116 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 23:53:01.320326   22116 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 23:53:01.320353   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0906 23:53:01.320427   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0906 23:53:01.320513   22116 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0906 23:53:01.320524   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> /etc/ssl/certs/136572.pem
	I0906 23:53:01.320615   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 23:53:01.328863   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0906 23:53:01.352202   22116 start.go:303] post-start completed in 127.239829ms
	I0906 23:53:01.352252   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetConfigRaw
	I0906 23:53:01.352862   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetIP
	I0906 23:53:01.355556   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.355962   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:01.355991   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.356263   22116 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/config.json ...
	I0906 23:53:01.356475   22116 start.go:128] duration metric: createHost completed in 26.144906394s
	I0906 23:53:01.356498   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:01.358558   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.358846   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:01.358887   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.359059   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHPort
	I0906 23:53:01.359227   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:01.359381   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:01.359537   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHUsername
	I0906 23:53:01.359738   22116 main.go:141] libmachine: Using SSH client type: native
	I0906 23:53:01.360120   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0906 23:53:01.360133   22116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0906 23:53:01.479806   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694044381.463975319
	
	I0906 23:53:01.479832   22116 fix.go:206] guest clock: 1694044381.463975319
	I0906 23:53:01.479846   22116 fix.go:219] Guest: 2023-09-06 23:53:01.463975319 +0000 UTC Remote: 2023-09-06 23:53:01.356487001 +0000 UTC m=+41.546957269 (delta=107.488318ms)
	I0906 23:53:01.479870   22116 fix.go:190] guest clock delta is within tolerance: 107.488318ms
	I0906 23:53:01.479876   22116 start.go:83] releasing machines lock for "ingress-addon-legacy-474162", held for 26.268404654s
	I0906 23:53:01.479897   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .DriverName
	I0906 23:53:01.480131   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetIP
	I0906 23:53:01.482923   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.483262   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:01.483283   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.483409   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .DriverName
	I0906 23:53:01.483853   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .DriverName
	I0906 23:53:01.484035   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .DriverName
	I0906 23:53:01.484107   22116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 23:53:01.484153   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:01.484213   22116 ssh_runner.go:195] Run: cat /version.json
	I0906 23:53:01.484236   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:01.486688   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.487031   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.487111   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:01.487162   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.487270   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHPort
	I0906 23:53:01.487467   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:01.487531   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:01.487561   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:01.487588   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHUsername
	I0906 23:53:01.487709   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHPort
	I0906 23:53:01.487780   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/id_rsa Username:docker}
	I0906 23:53:01.487870   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:01.488024   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHUsername
	I0906 23:53:01.488143   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/id_rsa Username:docker}
	I0906 23:53:01.593173   22116 ssh_runner.go:195] Run: systemctl --version
	I0906 23:53:01.598936   22116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 23:53:01.762791   22116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 23:53:01.768899   22116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 23:53:01.768976   22116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 23:53:01.784749   22116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 23:53:01.784770   22116 start.go:466] detecting cgroup driver to use...
	I0906 23:53:01.784837   22116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 23:53:01.800848   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 23:53:01.812576   22116 docker.go:196] disabling cri-docker service (if available) ...
	I0906 23:53:01.812632   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 23:53:01.824474   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 23:53:01.836734   22116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 23:53:01.944499   22116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 23:53:02.074677   22116 docker.go:212] disabling docker service ...
	I0906 23:53:02.074740   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 23:53:02.088134   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 23:53:02.099100   22116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 23:53:02.212588   22116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 23:53:02.327913   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 23:53:02.341038   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 23:53:02.358831   22116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 23:53:02.358905   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:53:02.368268   22116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 23:53:02.368338   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:53:02.377544   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:53:02.386968   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:53:02.396217   22116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 23:53:02.405400   22116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 23:53:02.413292   22116 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 23:53:02.413354   22116 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 23:53:02.424674   22116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 23:53:02.434086   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 23:53:02.547377   22116 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 23:53:02.710568   22116 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 23:53:02.710642   22116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 23:53:02.716086   22116 start.go:534] Will wait 60s for crictl version
	I0906 23:53:02.716135   22116 ssh_runner.go:195] Run: which crictl
	I0906 23:53:02.719915   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 23:53:02.749318   22116 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0906 23:53:02.749396   22116 ssh_runner.go:195] Run: crio --version
	I0906 23:53:02.796251   22116 ssh_runner.go:195] Run: crio --version
	I0906 23:53:02.847746   22116 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0906 23:53:02.849347   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetIP
	I0906 23:53:02.852043   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:02.852385   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:02.852416   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:02.852674   22116 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 23:53:02.857137   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 23:53:02.870322   22116 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0906 23:53:02.870387   22116 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 23:53:02.897399   22116 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0906 23:53:02.897463   22116 ssh_runner.go:195] Run: which lz4
	I0906 23:53:02.901603   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0906 23:53:02.901691   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0906 23:53:02.906014   22116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 23:53:02.906045   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0906 23:53:04.777274   22116 crio.go:444] Took 1.875606 seconds to copy over tarball
	I0906 23:53:04.777343   22116 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 23:53:08.021423   22116 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.244052847s)
	I0906 23:53:08.021452   22116 crio.go:451] Took 3.244150 seconds to extract the tarball
	I0906 23:53:08.021461   22116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 23:53:08.064347   22116 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 23:53:08.110739   22116 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0906 23:53:08.110761   22116 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 23:53:08.110822   22116 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 23:53:08.110855   22116 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0906 23:53:08.110914   22116 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 23:53:08.110931   22116 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0906 23:53:08.111041   22116 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0906 23:53:08.111064   22116 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0906 23:53:08.111102   22116 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0906 23:53:08.111092   22116 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0906 23:53:08.112166   22116 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 23:53:08.112200   22116 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0906 23:53:08.112263   22116 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0906 23:53:08.112171   22116 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0906 23:53:08.112173   22116 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0906 23:53:08.112335   22116 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0906 23:53:08.112379   22116 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 23:53:08.112437   22116 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0906 23:53:08.307428   22116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0906 23:53:08.349796   22116 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0906 23:53:08.349838   22116 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0906 23:53:08.349888   22116 ssh_runner.go:195] Run: which crictl
	I0906 23:53:08.354123   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0906 23:53:08.378969   22116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0906 23:53:08.383919   22116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0906 23:53:08.397236   22116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0906 23:53:08.401706   22116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0906 23:53:08.413902   22116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0906 23:53:08.414653   22116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0906 23:53:08.417568   22116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 23:53:08.452920   22116 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0906 23:53:08.452952   22116 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0906 23:53:08.452985   22116 ssh_runner.go:195] Run: which crictl
	I0906 23:53:08.526175   22116 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0906 23:53:08.526219   22116 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0906 23:53:08.526264   22116 ssh_runner.go:195] Run: which crictl
	I0906 23:53:08.536506   22116 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0906 23:53:08.536558   22116 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0906 23:53:08.536553   22116 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0906 23:53:08.536590   22116 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0906 23:53:08.536612   22116 ssh_runner.go:195] Run: which crictl
	I0906 23:53:08.536641   22116 ssh_runner.go:195] Run: which crictl
	I0906 23:53:08.550746   22116 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0906 23:53:08.550802   22116 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0906 23:53:08.550834   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 23:53:08.550841   22116 ssh_runner.go:195] Run: which crictl
	I0906 23:53:08.550871   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0906 23:53:08.550886   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0906 23:53:08.550903   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0906 23:53:08.550755   22116 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0906 23:53:08.550939   22116 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 23:53:08.550973   22116 ssh_runner.go:195] Run: which crictl
	I0906 23:53:08.604694   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0906 23:53:08.604735   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 23:53:08.604783   22116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0906 23:53:08.621903   22116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0906 23:53:08.621972   22116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0906 23:53:08.634421   22116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0906 23:53:08.670095   22116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0906 23:53:08.670302   22116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0906 23:53:09.053022   22116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 23:53:09.190436   22116 cache_images.go:92] LoadImages completed in 1.07965946s
	W0906 23:53:09.190542   22116 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0906 23:53:09.190632   22116 ssh_runner.go:195] Run: crio config
	I0906 23:53:09.248376   22116 cni.go:84] Creating CNI manager for ""
	I0906 23:53:09.248402   22116 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 23:53:09.248421   22116 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 23:53:09.248445   22116 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-474162 NodeName:ingress-addon-legacy-474162 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 23:53:09.248621   22116 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-474162"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 23:53:09.248708   22116 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-474162 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-474162 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 23:53:09.248778   22116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0906 23:53:09.257548   22116 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 23:53:09.257610   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 23:53:09.265869   22116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0906 23:53:09.281050   22116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0906 23:53:09.296921   22116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0906 23:53:09.312177   22116 ssh_runner.go:195] Run: grep 192.168.39.53	control-plane.minikube.internal$ /etc/hosts
	I0906 23:53:09.316254   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 23:53:09.328615   22116 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162 for IP: 192.168.39.53
	I0906 23:53:09.328644   22116 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:53:09.328800   22116 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0906 23:53:09.328851   22116 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0906 23:53:09.328906   22116 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.key
	I0906 23:53:09.328918   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt with IP's: []
	I0906 23:53:09.430067   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt ...
	I0906 23:53:09.430091   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: {Name:mk2236e94a86e0484ff6dc346c0e62cc95b776d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:53:09.430270   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.key ...
	I0906 23:53:09.430285   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.key: {Name:mk7f9648c598c80801ae019b68120821d402badb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:53:09.430394   22116 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.key.52e6c991
	I0906 23:53:09.430416   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.crt.52e6c991 with IP's: [192.168.39.53 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 23:53:09.780918   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.crt.52e6c991 ...
	I0906 23:53:09.780944   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.crt.52e6c991: {Name:mk3d960b6ade6ca145c5cb2cf7e73699bd673215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:53:09.781109   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.key.52e6c991 ...
	I0906 23:53:09.781121   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.key.52e6c991: {Name:mk7a4ca0486dbb3c9bcb4363d6692f14ab1b37ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:53:09.781218   22116 certs.go:337] copying /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.crt.52e6c991 -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.crt
	I0906 23:53:09.781318   22116 certs.go:341] copying /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.key.52e6c991 -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.key
	I0906 23:53:09.781382   22116 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/proxy-client.key
	I0906 23:53:09.781409   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/proxy-client.crt with IP's: []
	I0906 23:53:10.004465   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/proxy-client.crt ...
	I0906 23:53:10.004493   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/proxy-client.crt: {Name:mk114cc9c4dc64919406608ece3c31e0520415e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:53:10.004674   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/proxy-client.key ...
	I0906 23:53:10.004697   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/proxy-client.key: {Name:mk1a4f9ebbdf6abe65cbca256a0793b2551b77e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:53:10.004796   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 23:53:10.004822   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 23:53:10.004842   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 23:53:10.004860   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 23:53:10.004876   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 23:53:10.004888   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 23:53:10.004901   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 23:53:10.004920   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 23:53:10.004993   22116 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0906 23:53:10.005043   22116 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0906 23:53:10.005059   22116 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 23:53:10.005100   22116 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0906 23:53:10.005132   22116 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0906 23:53:10.005166   22116 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0906 23:53:10.005224   22116 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0906 23:53:10.005271   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem -> /usr/share/ca-certificates/13657.pem
	I0906 23:53:10.005296   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> /usr/share/ca-certificates/136572.pem
	I0906 23:53:10.005314   22116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:53:10.005876   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 23:53:10.031075   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 23:53:10.054762   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 23:53:10.077034   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 23:53:10.100009   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 23:53:10.123604   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 23:53:10.146829   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 23:53:10.169825   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 23:53:10.193479   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0906 23:53:10.216353   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0906 23:53:10.239530   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 23:53:10.262231   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 23:53:10.278091   22116 ssh_runner.go:195] Run: openssl version
	I0906 23:53:10.283663   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0906 23:53:10.293297   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0906 23:53:10.297660   22116 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0906 23:53:10.297702   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0906 23:53:10.302974   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0906 23:53:10.311987   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0906 23:53:10.321101   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0906 23:53:10.325574   22116 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0906 23:53:10.325607   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0906 23:53:10.330983   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 23:53:10.339881   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 23:53:10.348686   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:53:10.353133   22116 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:53:10.353175   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:53:10.358357   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 23:53:10.367514   22116 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 23:53:10.371377   22116 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 23:53:10.371419   22116 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-474162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-474162 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:53:10.371504   22116 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 23:53:10.371546   22116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 23:53:10.400385   22116 cri.go:89] found id: ""
	I0906 23:53:10.400486   22116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 23:53:10.408941   22116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 23:53:10.417120   22116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 23:53:10.425288   22116 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 23:53:10.425332   22116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0906 23:53:10.475066   22116 kubeadm.go:322] W0906 23:53:10.468685     960 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0906 23:53:10.601766   22116 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 23:53:13.869335   22116 kubeadm.go:322] W0906 23:53:13.864330     960 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 23:53:13.871183   22116 kubeadm.go:322] W0906 23:53:13.866277     960 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 23:53:23.449477   22116 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0906 23:53:23.449553   22116 kubeadm.go:322] [preflight] Running pre-flight checks
	I0906 23:53:23.449642   22116 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 23:53:23.449780   22116 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 23:53:23.449947   22116 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 23:53:23.450106   22116 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 23:53:23.450228   22116 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 23:53:23.450289   22116 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0906 23:53:23.450380   22116 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 23:53:23.452039   22116 out.go:204]   - Generating certificates and keys ...
	I0906 23:53:23.452141   22116 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0906 23:53:23.452229   22116 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0906 23:53:23.452326   22116 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 23:53:23.452403   22116 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0906 23:53:23.452492   22116 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0906 23:53:23.452565   22116 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0906 23:53:23.452640   22116 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0906 23:53:23.452831   22116 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-474162 localhost] and IPs [192.168.39.53 127.0.0.1 ::1]
	I0906 23:53:23.452934   22116 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0906 23:53:23.453134   22116 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-474162 localhost] and IPs [192.168.39.53 127.0.0.1 ::1]
	I0906 23:53:23.453228   22116 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 23:53:23.453308   22116 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 23:53:23.453365   22116 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0906 23:53:23.453422   22116 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 23:53:23.453495   22116 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 23:53:23.453562   22116 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 23:53:23.453691   22116 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 23:53:23.453782   22116 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 23:53:23.453890   22116 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 23:53:23.455387   22116 out.go:204]   - Booting up control plane ...
	I0906 23:53:23.455498   22116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 23:53:23.455607   22116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 23:53:23.455713   22116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 23:53:23.455843   22116 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 23:53:23.456014   22116 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 23:53:23.456089   22116 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002842 seconds
	I0906 23:53:23.456214   22116 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 23:53:23.456390   22116 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 23:53:23.456444   22116 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 23:53:23.456581   22116 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-474162 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0906 23:53:23.456668   22116 kubeadm.go:322] [bootstrap-token] Using token: v80h3m.i3a84n1l59snawy9
	I0906 23:53:23.458259   22116 out.go:204]   - Configuring RBAC rules ...
	I0906 23:53:23.458401   22116 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 23:53:23.458544   22116 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 23:53:23.458733   22116 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 23:53:23.458925   22116 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 23:53:23.459101   22116 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 23:53:23.459217   22116 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 23:53:23.459398   22116 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 23:53:23.459457   22116 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0906 23:53:23.459530   22116 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0906 23:53:23.459541   22116 kubeadm.go:322] 
	I0906 23:53:23.459617   22116 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0906 23:53:23.459626   22116 kubeadm.go:322] 
	I0906 23:53:23.459736   22116 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0906 23:53:23.459746   22116 kubeadm.go:322] 
	I0906 23:53:23.459781   22116 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0906 23:53:23.459871   22116 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 23:53:23.459978   22116 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 23:53:23.459999   22116 kubeadm.go:322] 
	I0906 23:53:23.460048   22116 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0906 23:53:23.460140   22116 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 23:53:23.460233   22116 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 23:53:23.460248   22116 kubeadm.go:322] 
	I0906 23:53:23.460371   22116 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 23:53:23.460474   22116 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0906 23:53:23.460488   22116 kubeadm.go:322] 
	I0906 23:53:23.460599   22116 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v80h3m.i3a84n1l59snawy9 \
	I0906 23:53:23.460748   22116 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0906 23:53:23.460782   22116 kubeadm.go:322]     --control-plane 
	I0906 23:53:23.460791   22116 kubeadm.go:322] 
	I0906 23:53:23.460901   22116 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0906 23:53:23.460917   22116 kubeadm.go:322] 
	I0906 23:53:23.461044   22116 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v80h3m.i3a84n1l59snawy9 \
	I0906 23:53:23.461203   22116 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0906 23:53:23.461215   22116 cni.go:84] Creating CNI manager for ""
	I0906 23:53:23.461226   22116 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 23:53:23.462826   22116 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 23:53:23.464000   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 23:53:23.483467   22116 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0906 23:53:23.501420   22116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 23:53:23.501512   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:23.501518   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=ingress-addon-legacy-474162 minikube.k8s.io/updated_at=2023_09_06T23_53_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:23.538731   22116 ops.go:34] apiserver oom_adj: -16
	I0906 23:53:23.768113   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:23.969894   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:24.570913   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:25.070809   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:25.570746   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:26.070920   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:26.571324   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:27.071244   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:27.571199   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:28.070428   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:28.571241   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:29.070555   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:29.570676   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:30.070865   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:30.570597   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:31.071077   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:31.570389   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:32.070870   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:32.570422   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:33.071070   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:33.570641   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:34.070681   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:34.570852   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:35.070311   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:35.571280   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:36.071142   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:36.570608   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:37.071275   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:37.570743   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:53:37.749992   22116 kubeadm.go:1081] duration metric: took 14.248542099s to wait for elevateKubeSystemPrivileges.
	I0906 23:53:37.750041   22116 kubeadm.go:406] StartCluster complete in 27.378623017s
	I0906 23:53:37.750061   22116 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:53:37.750181   22116 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0906 23:53:37.751177   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:53:37.751824   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 23:53:37.751961   22116 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0906 23:53:37.752065   22116 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-474162"
	I0906 23:53:37.752068   22116 config.go:182] Loaded profile config "ingress-addon-legacy-474162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0906 23:53:37.752109   22116 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-474162"
	I0906 23:53:37.752178   22116 host.go:66] Checking if "ingress-addon-legacy-474162" exists ...
	I0906 23:53:37.752110   22116 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-474162"
	I0906 23:53:37.752218   22116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-474162"
	I0906 23:53:37.752565   22116 kapi.go:59] client config for ingress-addon-legacy-474162: &rest.Config{Host:"https://192.168.39.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 23:53:37.752691   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:53:37.752712   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:53:37.752691   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:53:37.752815   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:53:37.753330   22116 cert_rotation.go:137] Starting client certificate rotation controller
	I0906 23:53:37.768162   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43647
	I0906 23:53:37.768553   22116 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:53:37.768700   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35197
	I0906 23:53:37.769171   22116 main.go:141] libmachine: Using API Version  1
	I0906 23:53:37.769208   22116 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:53:37.769268   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:53:37.769670   22116 main.go:141] libmachine: Using API Version  1
	I0906 23:53:37.769690   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:53:37.769705   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:53:37.770018   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:53:37.770177   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetState
	I0906 23:53:37.770268   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:53:37.770301   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:53:37.772670   22116 kapi.go:59] client config for ingress-addon-legacy-474162: &rest.Config{Host:"https://192.168.39.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 23:53:37.784508   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0906 23:53:37.784881   22116 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:53:37.785416   22116 main.go:141] libmachine: Using API Version  1
	I0906 23:53:37.785440   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:53:37.785741   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:53:37.785925   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetState
	I0906 23:53:37.787477   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .DriverName
	I0906 23:53:37.789768   22116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 23:53:37.791114   22116 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 23:53:37.791135   22116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 23:53:37.791154   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:37.794212   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:37.794705   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:37.794740   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:37.794913   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHPort
	I0906 23:53:37.795104   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:37.795266   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHUsername
	I0906 23:53:37.795448   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/id_rsa Username:docker}
	I0906 23:53:37.800723   22116 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-474162"
	I0906 23:53:37.800761   22116 host.go:66] Checking if "ingress-addon-legacy-474162" exists ...
	I0906 23:53:37.801026   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:53:37.801062   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:53:37.815172   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0906 23:53:37.815618   22116 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:53:37.816166   22116 main.go:141] libmachine: Using API Version  1
	I0906 23:53:37.816203   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:53:37.816508   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:53:37.817228   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:53:37.817274   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:53:37.820413   22116 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-474162" context rescaled to 1 replicas
	I0906 23:53:37.820451   22116 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 23:53:37.821848   22116 out.go:177] * Verifying Kubernetes components...
	I0906 23:53:37.823192   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 23:53:37.831613   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38733
	I0906 23:53:37.832060   22116 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:53:37.832549   22116 main.go:141] libmachine: Using API Version  1
	I0906 23:53:37.832567   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:53:37.832859   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:53:37.833036   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetState
	I0906 23:53:37.834565   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .DriverName
	I0906 23:53:37.834871   22116 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 23:53:37.834887   22116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 23:53:37.834901   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHHostname
	I0906 23:53:37.837979   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:37.838452   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:17:76", ip: ""} in network mk-ingress-addon-legacy-474162: {Iface:virbr1 ExpiryTime:2023-09-07 00:52:51 +0000 UTC Type:0 Mac:52:54:00:82:17:76 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ingress-addon-legacy-474162 Clientid:01:52:54:00:82:17:76}
	I0906 23:53:37.838487   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | domain ingress-addon-legacy-474162 has defined IP address 192.168.39.53 and MAC address 52:54:00:82:17:76 in network mk-ingress-addon-legacy-474162
	I0906 23:53:37.838739   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHPort
	I0906 23:53:37.838954   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHKeyPath
	I0906 23:53:37.839138   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .GetSSHUsername
	I0906 23:53:37.839319   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/ingress-addon-legacy-474162/id_rsa Username:docker}
	I0906 23:53:37.905178   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 23:53:37.905588   22116 kapi.go:59] client config for ingress-addon-legacy-474162: &rest.Config{Host:"https://192.168.39.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 23:53:37.905919   22116 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-474162" to be "Ready" ...
	I0906 23:53:37.908748   22116 node_ready.go:49] node "ingress-addon-legacy-474162" has status "Ready":"True"
	I0906 23:53:37.908767   22116 node_ready.go:38] duration metric: took 2.826909ms waiting for node "ingress-addon-legacy-474162" to be "Ready" ...
	I0906 23:53:37.908780   22116 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 23:53:37.919120   22116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-474162" in "kube-system" namespace to be "Ready" ...
	I0906 23:53:37.929958   22116 pod_ready.go:92] pod "etcd-ingress-addon-legacy-474162" in "kube-system" namespace has status "Ready":"True"
	I0906 23:53:37.929978   22116 pod_ready.go:81] duration metric: took 10.833851ms waiting for pod "etcd-ingress-addon-legacy-474162" in "kube-system" namespace to be "Ready" ...
	I0906 23:53:37.929993   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-474162" in "kube-system" namespace to be "Ready" ...
	I0906 23:53:37.943554   22116 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-474162" in "kube-system" namespace has status "Ready":"True"
	I0906 23:53:37.943578   22116 pod_ready.go:81] duration metric: took 13.577827ms waiting for pod "kube-apiserver-ingress-addon-legacy-474162" in "kube-system" namespace to be "Ready" ...
	I0906 23:53:37.943591   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-474162" in "kube-system" namespace to be "Ready" ...
	I0906 23:53:37.966763   22116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 23:53:37.969723   22116 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-474162" in "kube-system" namespace has status "Ready":"True"
	I0906 23:53:37.969745   22116 pod_ready.go:81] duration metric: took 26.14683ms waiting for pod "kube-controller-manager-ingress-addon-legacy-474162" in "kube-system" namespace to be "Ready" ...
	I0906 23:53:37.969760   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-474162" in "kube-system" namespace to be "Ready" ...
	I0906 23:53:37.975054   22116 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-474162" in "kube-system" namespace has status "Ready":"True"
	I0906 23:53:37.975076   22116 pod_ready.go:81] duration metric: took 5.308129ms waiting for pod "kube-scheduler-ingress-addon-legacy-474162" in "kube-system" namespace to be "Ready" ...
	I0906 23:53:37.975085   22116 pod_ready.go:38] duration metric: took 66.294681ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 23:53:37.975104   22116 api_server.go:52] waiting for apiserver process to appear ...
	I0906 23:53:37.975157   22116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 23:53:37.993590   22116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 23:53:38.848328   22116 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0906 23:53:38.948882   22116 main.go:141] libmachine: Making call to close driver server
	I0906 23:53:38.948916   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .Close
	I0906 23:53:38.948951   22116 api_server.go:72] duration metric: took 1.1284538s to wait for apiserver process to appear ...
	I0906 23:53:38.948974   22116 api_server.go:88] waiting for apiserver healthz status ...
	I0906 23:53:38.948996   22116 api_server.go:253] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0906 23:53:38.949035   22116 main.go:141] libmachine: Making call to close driver server
	I0906 23:53:38.949053   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .Close
	I0906 23:53:38.949245   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:53:38.949307   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Closing plugin on server side
	I0906 23:53:38.949321   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:53:38.949333   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:53:38.949345   22116 main.go:141] libmachine: Making call to close driver server
	I0906 23:53:38.949355   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .Close
	I0906 23:53:38.949353   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Closing plugin on server side
	I0906 23:53:38.949406   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:53:38.949421   22116 main.go:141] libmachine: Making call to close driver server
	I0906 23:53:38.949432   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .Close
	I0906 23:53:38.949738   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Closing plugin on server side
	I0906 23:53:38.949807   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:53:38.949852   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:53:38.950959   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:53:38.950976   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:53:38.950988   22116 main.go:141] libmachine: Making call to close driver server
	I0906 23:53:38.950998   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) Calling .Close
	I0906 23:53:38.951208   22116 main.go:141] libmachine: (ingress-addon-legacy-474162) DBG | Closing plugin on server side
	I0906 23:53:38.951225   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:53:38.951237   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:53:38.953064   22116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 23:53:38.954892   22116 addons.go:502] enable addons completed in 1.202929674s: enabled=[storage-provisioner default-storageclass]
	I0906 23:53:38.962820   22116 api_server.go:279] https://192.168.39.53:8443/healthz returned 200:
	ok
	I0906 23:53:38.965102   22116 api_server.go:141] control plane version: v1.18.20
	I0906 23:53:38.965125   22116 api_server.go:131] duration metric: took 16.134633ms to wait for apiserver health ...
	I0906 23:53:38.965132   22116 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 23:53:38.977063   22116 system_pods.go:59] 7 kube-system pods found
	I0906 23:53:38.977090   22116 system_pods.go:61] "coredns-66bff467f8-89cxc" [0a812a2a-1318-42f1-be9e-f78873a7c88d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0906 23:53:38.977098   22116 system_pods.go:61] "etcd-ingress-addon-legacy-474162" [e01ef37d-ac49-4be7-9d5e-8ad828b18f0e] Running
	I0906 23:53:38.977103   22116 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-474162" [c202dbc7-b7ce-45b2-aa9a-5256f9bb8ea9] Running
	I0906 23:53:38.977107   22116 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-474162" [7b168212-3871-46ae-a0a2-f71e9c69ffc9] Running
	I0906 23:53:38.977113   22116 system_pods.go:61] "kube-proxy-5s52l" [a5166204-0e90-479a-9135-a20313f9af9a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 23:53:38.977122   22116 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-474162" [aa26d99e-5f02-4f00-b9d0-538d16d0dbed] Running
	I0906 23:53:38.977133   22116 system_pods.go:61] "storage-provisioner" [661f5565-e9d0-4d7c-99f2-10a8c2ea3d02] Pending
	I0906 23:53:38.977138   22116 system_pods.go:74] duration metric: took 11.99858ms to wait for pod list to return data ...
	I0906 23:53:38.977147   22116 default_sa.go:34] waiting for default service account to be created ...
	I0906 23:53:38.980868   22116 default_sa.go:45] found service account: "default"
	I0906 23:53:38.980884   22116 default_sa.go:55] duration metric: took 3.732861ms for default service account to be created ...
	I0906 23:53:38.980890   22116 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 23:53:38.990149   22116 system_pods.go:86] 7 kube-system pods found
	I0906 23:53:38.990170   22116 system_pods.go:89] "coredns-66bff467f8-89cxc" [0a812a2a-1318-42f1-be9e-f78873a7c88d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0906 23:53:38.990178   22116 system_pods.go:89] "etcd-ingress-addon-legacy-474162" [e01ef37d-ac49-4be7-9d5e-8ad828b18f0e] Running
	I0906 23:53:38.990183   22116 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-474162" [c202dbc7-b7ce-45b2-aa9a-5256f9bb8ea9] Running
	I0906 23:53:38.990187   22116 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-474162" [7b168212-3871-46ae-a0a2-f71e9c69ffc9] Running
	I0906 23:53:38.990193   22116 system_pods.go:89] "kube-proxy-5s52l" [a5166204-0e90-479a-9135-a20313f9af9a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 23:53:38.990197   22116 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-474162" [aa26d99e-5f02-4f00-b9d0-538d16d0dbed] Running
	I0906 23:53:38.990203   22116 system_pods.go:89] "storage-provisioner" [661f5565-e9d0-4d7c-99f2-10a8c2ea3d02] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 23:53:38.990219   22116 retry.go:31] will retry after 212.82481ms: missing components: kube-dns, kube-proxy
	I0906 23:53:39.212341   22116 system_pods.go:86] 7 kube-system pods found
	I0906 23:53:39.212366   22116 system_pods.go:89] "coredns-66bff467f8-89cxc" [0a812a2a-1318-42f1-be9e-f78873a7c88d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0906 23:53:39.212373   22116 system_pods.go:89] "etcd-ingress-addon-legacy-474162" [e01ef37d-ac49-4be7-9d5e-8ad828b18f0e] Running
	I0906 23:53:39.212379   22116 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-474162" [c202dbc7-b7ce-45b2-aa9a-5256f9bb8ea9] Running
	I0906 23:53:39.212389   22116 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-474162" [7b168212-3871-46ae-a0a2-f71e9c69ffc9] Running
	I0906 23:53:39.212395   22116 system_pods.go:89] "kube-proxy-5s52l" [a5166204-0e90-479a-9135-a20313f9af9a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 23:53:39.212399   22116 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-474162" [aa26d99e-5f02-4f00-b9d0-538d16d0dbed] Running
	I0906 23:53:39.212404   22116 system_pods.go:89] "storage-provisioner" [661f5565-e9d0-4d7c-99f2-10a8c2ea3d02] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 23:53:39.212418   22116 retry.go:31] will retry after 321.856318ms: missing components: kube-dns, kube-proxy
	I0906 23:53:39.542195   22116 system_pods.go:86] 7 kube-system pods found
	I0906 23:53:39.542221   22116 system_pods.go:89] "coredns-66bff467f8-89cxc" [0a812a2a-1318-42f1-be9e-f78873a7c88d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0906 23:53:39.542229   22116 system_pods.go:89] "etcd-ingress-addon-legacy-474162" [e01ef37d-ac49-4be7-9d5e-8ad828b18f0e] Running
	I0906 23:53:39.542234   22116 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-474162" [c202dbc7-b7ce-45b2-aa9a-5256f9bb8ea9] Running
	I0906 23:53:39.542239   22116 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-474162" [7b168212-3871-46ae-a0a2-f71e9c69ffc9] Running
	I0906 23:53:39.542244   22116 system_pods.go:89] "kube-proxy-5s52l" [a5166204-0e90-479a-9135-a20313f9af9a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 23:53:39.542249   22116 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-474162" [aa26d99e-5f02-4f00-b9d0-538d16d0dbed] Running
	I0906 23:53:39.542256   22116 system_pods.go:89] "storage-provisioner" [661f5565-e9d0-4d7c-99f2-10a8c2ea3d02] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 23:53:39.542274   22116 retry.go:31] will retry after 415.554326ms: missing components: kube-dns, kube-proxy
	I0906 23:53:39.964593   22116 system_pods.go:86] 7 kube-system pods found
	I0906 23:53:39.964619   22116 system_pods.go:89] "coredns-66bff467f8-89cxc" [0a812a2a-1318-42f1-be9e-f78873a7c88d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0906 23:53:39.964625   22116 system_pods.go:89] "etcd-ingress-addon-legacy-474162" [e01ef37d-ac49-4be7-9d5e-8ad828b18f0e] Running
	I0906 23:53:39.964631   22116 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-474162" [c202dbc7-b7ce-45b2-aa9a-5256f9bb8ea9] Running
	I0906 23:53:39.964635   22116 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-474162" [7b168212-3871-46ae-a0a2-f71e9c69ffc9] Running
	I0906 23:53:39.964639   22116 system_pods.go:89] "kube-proxy-5s52l" [a5166204-0e90-479a-9135-a20313f9af9a] Running
	I0906 23:53:39.964643   22116 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-474162" [aa26d99e-5f02-4f00-b9d0-538d16d0dbed] Running
	I0906 23:53:39.964650   22116 system_pods.go:89] "storage-provisioner" [661f5565-e9d0-4d7c-99f2-10a8c2ea3d02] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 23:53:39.964664   22116 retry.go:31] will retry after 537.923164ms: missing components: kube-dns
	I0906 23:53:40.510835   22116 system_pods.go:86] 7 kube-system pods found
	I0906 23:53:40.510867   22116 system_pods.go:89] "coredns-66bff467f8-89cxc" [0a812a2a-1318-42f1-be9e-f78873a7c88d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 23:53:40.510877   22116 system_pods.go:89] "etcd-ingress-addon-legacy-474162" [e01ef37d-ac49-4be7-9d5e-8ad828b18f0e] Running
	I0906 23:53:40.510888   22116 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-474162" [c202dbc7-b7ce-45b2-aa9a-5256f9bb8ea9] Running
	I0906 23:53:40.510899   22116 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-474162" [7b168212-3871-46ae-a0a2-f71e9c69ffc9] Running
	I0906 23:53:40.510906   22116 system_pods.go:89] "kube-proxy-5s52l" [a5166204-0e90-479a-9135-a20313f9af9a] Running
	I0906 23:53:40.510913   22116 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-474162" [aa26d99e-5f02-4f00-b9d0-538d16d0dbed] Running
	I0906 23:53:40.510923   22116 system_pods.go:89] "storage-provisioner" [661f5565-e9d0-4d7c-99f2-10a8c2ea3d02] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 23:53:40.510941   22116 retry.go:31] will retry after 625.633231ms: missing components: kube-dns
	I0906 23:53:41.143814   22116 system_pods.go:86] 7 kube-system pods found
	I0906 23:53:41.143846   22116 system_pods.go:89] "coredns-66bff467f8-89cxc" [0a812a2a-1318-42f1-be9e-f78873a7c88d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 23:53:41.143856   22116 system_pods.go:89] "etcd-ingress-addon-legacy-474162" [e01ef37d-ac49-4be7-9d5e-8ad828b18f0e] Running
	I0906 23:53:41.143865   22116 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-474162" [c202dbc7-b7ce-45b2-aa9a-5256f9bb8ea9] Running
	I0906 23:53:41.143873   22116 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-474162" [7b168212-3871-46ae-a0a2-f71e9c69ffc9] Running
	I0906 23:53:41.143880   22116 system_pods.go:89] "kube-proxy-5s52l" [a5166204-0e90-479a-9135-a20313f9af9a] Running
	I0906 23:53:41.143887   22116 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-474162" [aa26d99e-5f02-4f00-b9d0-538d16d0dbed] Running
	I0906 23:53:41.143898   22116 system_pods.go:89] "storage-provisioner" [661f5565-e9d0-4d7c-99f2-10a8c2ea3d02] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 23:53:41.143914   22116 retry.go:31] will retry after 784.774041ms: missing components: kube-dns
	I0906 23:53:41.936705   22116 system_pods.go:86] 7 kube-system pods found
	I0906 23:53:41.936745   22116 system_pods.go:89] "coredns-66bff467f8-89cxc" [0a812a2a-1318-42f1-be9e-f78873a7c88d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 23:53:41.936755   22116 system_pods.go:89] "etcd-ingress-addon-legacy-474162" [e01ef37d-ac49-4be7-9d5e-8ad828b18f0e] Running
	I0906 23:53:41.936764   22116 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-474162" [c202dbc7-b7ce-45b2-aa9a-5256f9bb8ea9] Running
	I0906 23:53:41.936771   22116 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-474162" [7b168212-3871-46ae-a0a2-f71e9c69ffc9] Running
	I0906 23:53:41.936776   22116 system_pods.go:89] "kube-proxy-5s52l" [a5166204-0e90-479a-9135-a20313f9af9a] Running
	I0906 23:53:41.936782   22116 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-474162" [aa26d99e-5f02-4f00-b9d0-538d16d0dbed] Running
	I0906 23:53:41.936789   22116 system_pods.go:89] "storage-provisioner" [661f5565-e9d0-4d7c-99f2-10a8c2ea3d02] Running
	I0906 23:53:41.936806   22116 retry.go:31] will retry after 890.561454ms: missing components: kube-dns
	I0906 23:53:42.835066   22116 system_pods.go:86] 7 kube-system pods found
	I0906 23:53:42.835096   22116 system_pods.go:89] "coredns-66bff467f8-89cxc" [0a812a2a-1318-42f1-be9e-f78873a7c88d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 23:53:42.835103   22116 system_pods.go:89] "etcd-ingress-addon-legacy-474162" [e01ef37d-ac49-4be7-9d5e-8ad828b18f0e] Running
	I0906 23:53:42.835108   22116 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-474162" [c202dbc7-b7ce-45b2-aa9a-5256f9bb8ea9] Running
	I0906 23:53:42.835115   22116 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-474162" [7b168212-3871-46ae-a0a2-f71e9c69ffc9] Running
	I0906 23:53:42.835120   22116 system_pods.go:89] "kube-proxy-5s52l" [a5166204-0e90-479a-9135-a20313f9af9a] Running
	I0906 23:53:42.835124   22116 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-474162" [aa26d99e-5f02-4f00-b9d0-538d16d0dbed] Running
	I0906 23:53:42.835130   22116 system_pods.go:89] "storage-provisioner" [661f5565-e9d0-4d7c-99f2-10a8c2ea3d02] Running
	I0906 23:53:42.835143   22116 retry.go:31] will retry after 1.133904748s: missing components: kube-dns
	I0906 23:53:43.975521   22116 system_pods.go:86] 7 kube-system pods found
	I0906 23:53:43.975550   22116 system_pods.go:89] "coredns-66bff467f8-89cxc" [0a812a2a-1318-42f1-be9e-f78873a7c88d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 23:53:43.975557   22116 system_pods.go:89] "etcd-ingress-addon-legacy-474162" [e01ef37d-ac49-4be7-9d5e-8ad828b18f0e] Running
	I0906 23:53:43.975566   22116 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-474162" [c202dbc7-b7ce-45b2-aa9a-5256f9bb8ea9] Running
	I0906 23:53:43.975570   22116 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-474162" [7b168212-3871-46ae-a0a2-f71e9c69ffc9] Running
	I0906 23:53:43.975574   22116 system_pods.go:89] "kube-proxy-5s52l" [a5166204-0e90-479a-9135-a20313f9af9a] Running
	I0906 23:53:43.975578   22116 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-474162" [aa26d99e-5f02-4f00-b9d0-538d16d0dbed] Running
	I0906 23:53:43.975582   22116 system_pods.go:89] "storage-provisioner" [661f5565-e9d0-4d7c-99f2-10a8c2ea3d02] Running
	I0906 23:53:43.975587   22116 system_pods.go:126] duration metric: took 4.994693756s to wait for k8s-apps to be running ...
	I0906 23:53:43.975594   22116 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 23:53:43.975633   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 23:53:43.990067   22116 system_svc.go:56] duration metric: took 14.462821ms WaitForService to wait for kubelet.
	I0906 23:53:43.990099   22116 kubeadm.go:581] duration metric: took 6.16962024s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 23:53:43.990116   22116 node_conditions.go:102] verifying NodePressure condition ...
	I0906 23:53:43.993665   22116 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0906 23:53:43.993692   22116 node_conditions.go:123] node cpu capacity is 2
	I0906 23:53:43.993700   22116 node_conditions.go:105] duration metric: took 3.580045ms to run NodePressure ...
	I0906 23:53:43.993713   22116 start.go:228] waiting for startup goroutines ...
	I0906 23:53:43.993719   22116 start.go:233] waiting for cluster config update ...
	I0906 23:53:43.993728   22116 start.go:242] writing updated cluster config ...
	I0906 23:53:43.993997   22116 ssh_runner.go:195] Run: rm -f paused
	I0906 23:53:44.038725   22116 start.go:600] kubectl: 1.28.1, cluster: 1.18.20 (minor skew: 10)
	I0906 23:53:44.040876   22116 out.go:177] 
	W0906 23:53:44.042574   22116 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0906 23:53:44.044165   22116 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0906 23:53:44.045761   22116 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-474162" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-09-06 23:52:47 UTC, ends at Wed 2023-09-06 23:56:52 UTC. --
	Sep 06 23:56:51 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:51.889454072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=110d9614-62bc-4899-8729-33ca106d5955 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:51 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:51.889735192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95c9cb0cfb863ac493ff99f1aec1135cfb9c586829af69e93859f686dc330670,PodSandboxId:b42f8e8dbe4b3e31d2bd9720b5c9d82d5ae25c16470d52af765b7a43621a3505,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694044598026799068,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-d4vlz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f31bec6-6caa-429c-a2d0-2b5064520a8e,},Annotations:map[string]string{io.kubernetes.container.hash: 40bb5ae8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b839a383e557a72574dd789f8233aa27eb068d6c042dc352e0e1ea09305204,PodSandboxId:7ebdbb807711b50a5181da28941f381f2355799345fc92f823b1be158ee6a3a8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694044457540525662,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cae909fa-18da-4683-b981-bfee5420863a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d408c52c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e7903bf4eb16e365a2ec65a3e0485da865ec925bb3bbb39da495366f5c104f,PodSandboxId:f497bb1f4e425f73592e126de56bccf5f5cd687e567dcb5741c7cd4fa37dd8d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694044440030075260,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-pt5wl,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 69318db7-18dd-4988-b5e1-bd9934902b07,},Annotations:map[string]string{io.kubernetes.container.hash: 5128d650,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8f2f0d2cdcf9fcc20d304f784659d1da19499998dc1e109087b4b04750d179d1,PodSandboxId:a06a58c784bccb3f0bd4eb22ac78e4d5c810b71512a224aeba3312fca6b5ae3a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044430755701950,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hjgdk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34e4f9f1-54a3-4936-a2ab-acc954a1861b,},Annotations:map[string]string{io.kubernetes.container.hash: d7365f63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2522f5e8a2c6e9e6011ec0ba3e8d7bcedbf04c4fd39bdbc89628d2c6687887e4,PodSandboxId:01569997aba2c49c8bdb8ab812cfcef8ee0b75a28096ed91e974f590e5a6335c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044429486368343,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n82hc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6ceafa78-32bc-4e97-8f2c-b27b2c624846,},Annotations:map[string]string{io.kubernetes.container.hash: b055543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017ff28f9887bdcbf047aa62807d3c23ae52eb9faa8746c325810575153e6b7e,PodSandboxId:64370c66223cab4f86c8f2598639650897d708b8f08c2ee5bc8e0b33648943c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694044422029245734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-89cxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a812a2a-1318-42f1-be9e-f78873a7c88d,},Annotations:map[string]string{io.kubernetes.container.hash: ae1635b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34b4915e994b2c9a74012cd1c17
3617d8b0a08bfa07ab728e8c95e59cb982dd,PodSandboxId:71d5e113598096d48d18c0821249e0ffe6c219255298e0420a450e45536ab213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694044421141422521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661f5565-e9d0-4d7c-99f2-10a8c2ea3d02,},Annotations:map[string]string{io.kubernetes.container.hash: ff1649e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a57e794c1fe7c78b410a98d8a5618
e7faef9a5bc7a4783092527d5e7aa62f0e6,PodSandboxId:2cea3f6e09783f323be08dd5c591f1984cb8a83f20209e5dd872cbc80cacb149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694044419465554564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5s52l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5166204-0e90-479a-9135-a20313f9af9a,},Annotations:map[string]string{io.kubernetes.container.hash: da2b0be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bebdff4ca18697cd265665e690a2b7c42d4c115df6bd70e1b7f0f5bdb665d8,PodS
andboxId:1cec9aa1b2ac9da7b5b6fcf5575f9331fc1e1b7d5132247e9d5c784de6f0bec8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694044396339419978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab339f514cba0315da55d49dd2a75764,},Annotations:map[string]string{io.kubernetes.container.hash: fb0e2a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73674c50ee793828c1feddd9338b2524e01d0709f52665356806693ee313a0f,PodSandboxId:30c5f3d28fbba26503a44f1f04c3372ecb504
fe90ad91012cab8876fe601c5ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694044395619743994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39361a5a71bc01c13b5b5c9019589961977ed794c976936e6baad6ee86f2226c,PodSandboxId:6fed6692c8d9fddd7f4ebfb563eefff39d54665e316
b87afc62a00d38d0a9793,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694044395092018894,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f0b06ebe99090d9bfd5321d5239fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 8a4558f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d352b6b2a4243c31cebe92c8c761469f1641a756640677dccab21195d758a597,PodSandboxId:6201b23ee6608f4d3520dcf72bd73e9c897936a630890a388
e52de8dbf80444c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694044395109250967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=110d9614-62bc-4899-8729-33ca106d5955 name=/runtime.v1alpha2.Runti
meService/ListContainers
	Sep 06 23:56:51 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:51.924554895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=751b74c8-6300-443e-9ff7-af1968e90a2b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:51 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:51.924656806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=751b74c8-6300-443e-9ff7-af1968e90a2b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:51 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:51.924892882Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95c9cb0cfb863ac493ff99f1aec1135cfb9c586829af69e93859f686dc330670,PodSandboxId:b42f8e8dbe4b3e31d2bd9720b5c9d82d5ae25c16470d52af765b7a43621a3505,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694044598026799068,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-d4vlz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f31bec6-6caa-429c-a2d0-2b5064520a8e,},Annotations:map[string]string{io.kubernetes.container.hash: 40bb5ae8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b839a383e557a72574dd789f8233aa27eb068d6c042dc352e0e1ea09305204,PodSandboxId:7ebdbb807711b50a5181da28941f381f2355799345fc92f823b1be158ee6a3a8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694044457540525662,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cae909fa-18da-4683-b981-bfee5420863a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d408c52c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e7903bf4eb16e365a2ec65a3e0485da865ec925bb3bbb39da495366f5c104f,PodSandboxId:f497bb1f4e425f73592e126de56bccf5f5cd687e567dcb5741c7cd4fa37dd8d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694044440030075260,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-pt5wl,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 69318db7-18dd-4988-b5e1-bd9934902b07,},Annotations:map[string]string{io.kubernetes.container.hash: 5128d650,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8f2f0d2cdcf9fcc20d304f784659d1da19499998dc1e109087b4b04750d179d1,PodSandboxId:a06a58c784bccb3f0bd4eb22ac78e4d5c810b71512a224aeba3312fca6b5ae3a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044430755701950,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hjgdk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34e4f9f1-54a3-4936-a2ab-acc954a1861b,},Annotations:map[string]string{io.kubernetes.container.hash: d7365f63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2522f5e8a2c6e9e6011ec0ba3e8d7bcedbf04c4fd39bdbc89628d2c6687887e4,PodSandboxId:01569997aba2c49c8bdb8ab812cfcef8ee0b75a28096ed91e974f590e5a6335c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044429486368343,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n82hc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6ceafa78-32bc-4e97-8f2c-b27b2c624846,},Annotations:map[string]string{io.kubernetes.container.hash: b055543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017ff28f9887bdcbf047aa62807d3c23ae52eb9faa8746c325810575153e6b7e,PodSandboxId:64370c66223cab4f86c8f2598639650897d708b8f08c2ee5bc8e0b33648943c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694044422029245734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-89cxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a812a2a-1318-42f1-be9e-f78873a7c88d,},Annotations:map[string]string{io.kubernetes.container.hash: ae1635b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34b4915e994b2c9a74012cd1c17
3617d8b0a08bfa07ab728e8c95e59cb982dd,PodSandboxId:71d5e113598096d48d18c0821249e0ffe6c219255298e0420a450e45536ab213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694044421141422521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661f5565-e9d0-4d7c-99f2-10a8c2ea3d02,},Annotations:map[string]string{io.kubernetes.container.hash: ff1649e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a57e794c1fe7c78b410a98d8a5618
e7faef9a5bc7a4783092527d5e7aa62f0e6,PodSandboxId:2cea3f6e09783f323be08dd5c591f1984cb8a83f20209e5dd872cbc80cacb149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694044419465554564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5s52l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5166204-0e90-479a-9135-a20313f9af9a,},Annotations:map[string]string{io.kubernetes.container.hash: da2b0be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bebdff4ca18697cd265665e690a2b7c42d4c115df6bd70e1b7f0f5bdb665d8,PodS
andboxId:1cec9aa1b2ac9da7b5b6fcf5575f9331fc1e1b7d5132247e9d5c784de6f0bec8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694044396339419978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab339f514cba0315da55d49dd2a75764,},Annotations:map[string]string{io.kubernetes.container.hash: fb0e2a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73674c50ee793828c1feddd9338b2524e01d0709f52665356806693ee313a0f,PodSandboxId:30c5f3d28fbba26503a44f1f04c3372ecb504
fe90ad91012cab8876fe601c5ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694044395619743994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39361a5a71bc01c13b5b5c9019589961977ed794c976936e6baad6ee86f2226c,PodSandboxId:6fed6692c8d9fddd7f4ebfb563eefff39d54665e316
b87afc62a00d38d0a9793,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694044395092018894,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f0b06ebe99090d9bfd5321d5239fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 8a4558f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d352b6b2a4243c31cebe92c8c761469f1641a756640677dccab21195d758a597,PodSandboxId:6201b23ee6608f4d3520dcf72bd73e9c897936a630890a388
e52de8dbf80444c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694044395109250967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=751b74c8-6300-443e-9ff7-af1968e90a2b name=/runtime.v1alpha2.Runti
meService/ListContainers
	Sep 06 23:56:51 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:51.960773570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=99e920bd-d2d5-432f-873a-998e50980133 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:51 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:51.960867019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=99e920bd-d2d5-432f-873a-998e50980133 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:51 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:51.961262845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95c9cb0cfb863ac493ff99f1aec1135cfb9c586829af69e93859f686dc330670,PodSandboxId:b42f8e8dbe4b3e31d2bd9720b5c9d82d5ae25c16470d52af765b7a43621a3505,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694044598026799068,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-d4vlz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f31bec6-6caa-429c-a2d0-2b5064520a8e,},Annotations:map[string]string{io.kubernetes.container.hash: 40bb5ae8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b839a383e557a72574dd789f8233aa27eb068d6c042dc352e0e1ea09305204,PodSandboxId:7ebdbb807711b50a5181da28941f381f2355799345fc92f823b1be158ee6a3a8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694044457540525662,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cae909fa-18da-4683-b981-bfee5420863a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d408c52c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e7903bf4eb16e365a2ec65a3e0485da865ec925bb3bbb39da495366f5c104f,PodSandboxId:f497bb1f4e425f73592e126de56bccf5f5cd687e567dcb5741c7cd4fa37dd8d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694044440030075260,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-pt5wl,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 69318db7-18dd-4988-b5e1-bd9934902b07,},Annotations:map[string]string{io.kubernetes.container.hash: 5128d650,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8f2f0d2cdcf9fcc20d304f784659d1da19499998dc1e109087b4b04750d179d1,PodSandboxId:a06a58c784bccb3f0bd4eb22ac78e4d5c810b71512a224aeba3312fca6b5ae3a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044430755701950,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hjgdk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34e4f9f1-54a3-4936-a2ab-acc954a1861b,},Annotations:map[string]string{io.kubernetes.container.hash: d7365f63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2522f5e8a2c6e9e6011ec0ba3e8d7bcedbf04c4fd39bdbc89628d2c6687887e4,PodSandboxId:01569997aba2c49c8bdb8ab812cfcef8ee0b75a28096ed91e974f590e5a6335c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044429486368343,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n82hc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6ceafa78-32bc-4e97-8f2c-b27b2c624846,},Annotations:map[string]string{io.kubernetes.container.hash: b055543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017ff28f9887bdcbf047aa62807d3c23ae52eb9faa8746c325810575153e6b7e,PodSandboxId:64370c66223cab4f86c8f2598639650897d708b8f08c2ee5bc8e0b33648943c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694044422029245734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-89cxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a812a2a-1318-42f1-be9e-f78873a7c88d,},Annotations:map[string]string{io.kubernetes.container.hash: ae1635b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34b4915e994b2c9a74012cd1c17
3617d8b0a08bfa07ab728e8c95e59cb982dd,PodSandboxId:71d5e113598096d48d18c0821249e0ffe6c219255298e0420a450e45536ab213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694044421141422521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661f5565-e9d0-4d7c-99f2-10a8c2ea3d02,},Annotations:map[string]string{io.kubernetes.container.hash: ff1649e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a57e794c1fe7c78b410a98d8a5618
e7faef9a5bc7a4783092527d5e7aa62f0e6,PodSandboxId:2cea3f6e09783f323be08dd5c591f1984cb8a83f20209e5dd872cbc80cacb149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694044419465554564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5s52l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5166204-0e90-479a-9135-a20313f9af9a,},Annotations:map[string]string{io.kubernetes.container.hash: da2b0be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bebdff4ca18697cd265665e690a2b7c42d4c115df6bd70e1b7f0f5bdb665d8,PodS
andboxId:1cec9aa1b2ac9da7b5b6fcf5575f9331fc1e1b7d5132247e9d5c784de6f0bec8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694044396339419978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab339f514cba0315da55d49dd2a75764,},Annotations:map[string]string{io.kubernetes.container.hash: fb0e2a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73674c50ee793828c1feddd9338b2524e01d0709f52665356806693ee313a0f,PodSandboxId:30c5f3d28fbba26503a44f1f04c3372ecb504
fe90ad91012cab8876fe601c5ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694044395619743994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39361a5a71bc01c13b5b5c9019589961977ed794c976936e6baad6ee86f2226c,PodSandboxId:6fed6692c8d9fddd7f4ebfb563eefff39d54665e316
b87afc62a00d38d0a9793,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694044395092018894,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f0b06ebe99090d9bfd5321d5239fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 8a4558f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d352b6b2a4243c31cebe92c8c761469f1641a756640677dccab21195d758a597,PodSandboxId:6201b23ee6608f4d3520dcf72bd73e9c897936a630890a388
e52de8dbf80444c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694044395109250967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=99e920bd-d2d5-432f-873a-998e50980133 name=/runtime.v1alpha2.Runti
meService/ListContainers
	Sep 06 23:56:51 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:51.995507214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=59bb8968-c7a8-4726-a037-bee65950839d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:51 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:51.995598799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=59bb8968-c7a8-4726-a037-bee65950839d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:51 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:51.995866422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95c9cb0cfb863ac493ff99f1aec1135cfb9c586829af69e93859f686dc330670,PodSandboxId:b42f8e8dbe4b3e31d2bd9720b5c9d82d5ae25c16470d52af765b7a43621a3505,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694044598026799068,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-d4vlz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f31bec6-6caa-429c-a2d0-2b5064520a8e,},Annotations:map[string]string{io.kubernetes.container.hash: 40bb5ae8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b839a383e557a72574dd789f8233aa27eb068d6c042dc352e0e1ea09305204,PodSandboxId:7ebdbb807711b50a5181da28941f381f2355799345fc92f823b1be158ee6a3a8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694044457540525662,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cae909fa-18da-4683-b981-bfee5420863a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d408c52c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e7903bf4eb16e365a2ec65a3e0485da865ec925bb3bbb39da495366f5c104f,PodSandboxId:f497bb1f4e425f73592e126de56bccf5f5cd687e567dcb5741c7cd4fa37dd8d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694044440030075260,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-pt5wl,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 69318db7-18dd-4988-b5e1-bd9934902b07,},Annotations:map[string]string{io.kubernetes.container.hash: 5128d650,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8f2f0d2cdcf9fcc20d304f784659d1da19499998dc1e109087b4b04750d179d1,PodSandboxId:a06a58c784bccb3f0bd4eb22ac78e4d5c810b71512a224aeba3312fca6b5ae3a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044430755701950,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hjgdk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34e4f9f1-54a3-4936-a2ab-acc954a1861b,},Annotations:map[string]string{io.kubernetes.container.hash: d7365f63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2522f5e8a2c6e9e6011ec0ba3e8d7bcedbf04c4fd39bdbc89628d2c6687887e4,PodSandboxId:01569997aba2c49c8bdb8ab812cfcef8ee0b75a28096ed91e974f590e5a6335c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044429486368343,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n82hc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6ceafa78-32bc-4e97-8f2c-b27b2c624846,},Annotations:map[string]string{io.kubernetes.container.hash: b055543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017ff28f9887bdcbf047aa62807d3c23ae52eb9faa8746c325810575153e6b7e,PodSandboxId:64370c66223cab4f86c8f2598639650897d708b8f08c2ee5bc8e0b33648943c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694044422029245734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-89cxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a812a2a-1318-42f1-be9e-f78873a7c88d,},Annotations:map[string]string{io.kubernetes.container.hash: ae1635b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34b4915e994b2c9a74012cd1c17
3617d8b0a08bfa07ab728e8c95e59cb982dd,PodSandboxId:71d5e113598096d48d18c0821249e0ffe6c219255298e0420a450e45536ab213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694044421141422521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661f5565-e9d0-4d7c-99f2-10a8c2ea3d02,},Annotations:map[string]string{io.kubernetes.container.hash: ff1649e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a57e794c1fe7c78b410a98d8a5618
e7faef9a5bc7a4783092527d5e7aa62f0e6,PodSandboxId:2cea3f6e09783f323be08dd5c591f1984cb8a83f20209e5dd872cbc80cacb149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694044419465554564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5s52l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5166204-0e90-479a-9135-a20313f9af9a,},Annotations:map[string]string{io.kubernetes.container.hash: da2b0be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bebdff4ca18697cd265665e690a2b7c42d4c115df6bd70e1b7f0f5bdb665d8,PodS
andboxId:1cec9aa1b2ac9da7b5b6fcf5575f9331fc1e1b7d5132247e9d5c784de6f0bec8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694044396339419978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab339f514cba0315da55d49dd2a75764,},Annotations:map[string]string{io.kubernetes.container.hash: fb0e2a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73674c50ee793828c1feddd9338b2524e01d0709f52665356806693ee313a0f,PodSandboxId:30c5f3d28fbba26503a44f1f04c3372ecb504
fe90ad91012cab8876fe601c5ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694044395619743994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39361a5a71bc01c13b5b5c9019589961977ed794c976936e6baad6ee86f2226c,PodSandboxId:6fed6692c8d9fddd7f4ebfb563eefff39d54665e316
b87afc62a00d38d0a9793,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694044395092018894,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f0b06ebe99090d9bfd5321d5239fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 8a4558f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d352b6b2a4243c31cebe92c8c761469f1641a756640677dccab21195d758a597,PodSandboxId:6201b23ee6608f4d3520dcf72bd73e9c897936a630890a388
e52de8dbf80444c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694044395109250967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=59bb8968-c7a8-4726-a037-bee65950839d name=/runtime.v1alpha2.Runti
meService/ListContainers
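	The debug entries above record CRI ListContainers and ListPodSandbox calls served by CRI-O over the v1alpha2 RuntimeService. The same queries can be issued by hand from the node when debugging; a minimal sketch (hypothetical invocation, assuming the node's crictl is already configured for the CRI-O socket):
	  out/minikube-linux-amd64 -p ingress-addon-legacy-474162 ssh "sudo crictl ps -a"   # lists all containers via the same ListContainers RPC, no filters
	  out/minikube-linux-amd64 -p ingress-addon-legacy-474162 ssh "sudo crictl pods"    # lists pod sandboxes via the same ListPodSandbox RPC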
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.002248742Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=16d407bc-1826-48a9-a8de-daa2c5f92d5d name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.002617319Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b42f8e8dbe4b3e31d2bd9720b5c9d82d5ae25c16470d52af765b7a43621a3505,Metadata:&PodSandboxMetadata{Name:hello-world-app-5f5d8b66bb-d4vlz,Uid:3f31bec6-6caa-429c-a2d0-2b5064520a8e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694044594389667294,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-d4vlz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f31bec6-6caa-429c-a2d0-2b5064520a8e,pod-template-hash: 5f5d8b66bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-06T23:56:34.040101950Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ebdbb807711b50a5181da28941f381f2355799345fc92f823b1be158ee6a3a8,Metadata:&PodSandboxMetadata{Name:nginx,Uid:cae909fa-18da-4683-b981-bfee5420863a,Namespace:defau
lt,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694044452458595811,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cae909fa-18da-4683-b981-bfee5420863a,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-06T23:54:11.213540419Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ed06e0631866d26cbb66399149a35ca3b07845ac491034c18c1cf6bcc3351066,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:19f301d2-d9d3-44a6-9d85-b7c9f8cb7302,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1694044442458852868,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f301d2-d9d3-44a6-9d85-b7c9f8cb7302,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configura
tion: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2023-09-06T23:54:02.106790169Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f497bb1f4e425f73592e126de56bccf5f5cd687e567dcb5741c7cd4fa37dd8d9,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-7fcf777cb7-pt5wl,Uid:69318db7-18dd-4988-b5e1
-bd9934902b07,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1694044432681840143,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-pt5wl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 69318db7-18dd-4988-b5e1-bd9934902b07,pod-template-hash: 7fcf777cb7,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-06T23:53:44.840719516Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a06a58c784bccb3f0bd4eb22ac78e4d5c810b71512a224aeba3312fca6b5ae3a,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-hjgdk,Uid:34e4f9f1-54a3-4936-a2ab-acc954a1861b,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1694044425332044622,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/ins
tance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 93b2aa87-85dd-4793-8b21-106f8f0b34b4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-hjgdk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34e4f9f1-54a3-4936-a2ab-acc954a1861b,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-06T23:53:44.989229704Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01569997aba2c49c8bdb8ab812cfcef8ee0b75a28096ed91e974f590e5a6335c,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-n82hc,Uid:6ceafa78-32bc-4e97-8f2c-b27b2c624846,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1694044425226068683,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: ed22f663-9baf-4da9-ad20-4b448bc07794,io.kubernetes.container.name: POD,io.kubernete
s.pod.name: ingress-nginx-admission-create-n82hc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6ceafa78-32bc-4e97-8f2c-b27b2c624846,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-06T23:53:44.885565362Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64370c66223cab4f86c8f2598639650897d708b8f08c2ee5bc8e0b33648943c1,Metadata:&PodSandboxMetadata{Name:coredns-66bff467f8-89cxc,Uid:0a812a2a-1318-42f1-be9e-f78873a7c88d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694044421851672961,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bff467f8-89cxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a812a2a-1318-42f1-be9e-f78873a7c88d,k8s-app: kube-dns,pod-template-hash: 66bff467f8,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-06T23:53:40.004198147Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:71d5e1135
98096d48d18c0821249e0ffe6c219255298e0420a450e45536ab213,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:661f5565-e9d0-4d7c-99f2-10a8c2ea3d02,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694044420789714493,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661f5565-e9d0-4d7c-99f2-10a8c2ea3d02,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storag
e-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-06T23:53:38.948225069Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2cea3f6e09783f323be08dd5c591f1984cb8a83f20209e5dd872cbc80cacb149,Metadata:&PodSandboxMetadata{Name:kube-proxy-5s52l,Uid:a5166204-0e90-479a-9135-a20313f9af9a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694044419155344570,Labels:map[string]string{controller-revision-hash: 5bdc57b48f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5s52l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5166204-0e90-479a-9135-a20313f9af9a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-06T23:53:38.811380487Z,kubernetes.io/config.source: api,},Runtime
Handler:,},&PodSandbox{Id:1cec9aa1b2ac9da7b5b6fcf5575f9331fc1e1b7d5132247e9d5c784de6f0bec8,Metadata:&PodSandboxMetadata{Name:etcd-ingress-addon-legacy-474162,Uid:ab339f514cba0315da55d49dd2a75764,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694044394660995470,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab339f514cba0315da55d49dd2a75764,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.53:2379,kubernetes.io/config.hash: ab339f514cba0315da55d49dd2a75764,kubernetes.io/config.seen: 2023-09-06T23:53:13.876305581Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:30c5f3d28fbba26503a44f1f04c3372ecb504fe90ad91012cab8876fe601c5ef,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ingress-addon-legacy-474162,Uid:d12e497b0008e22acbcd5a9cf2dd48ac,Namespace:kube-system
,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694044394636663448,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d12e497b0008e22acbcd5a9cf2dd48ac,kubernetes.io/config.seen: 2023-09-06T23:53:13.874892246Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6201b23ee6608f4d3520dcf72bd73e9c897936a630890a388e52de8dbf80444c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ingress-addon-legacy-474162,Uid:b395a1e17534e69e27827b1f8d737725,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694044394631261734,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b395a1e17534e69e27827b1f8d737725,kubernetes.io/config.seen: 2023-09-06T23:53:13.869537318Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6fed6692c8d9fddd7f4ebfb563eefff39d54665e316b87afc62a00d38d0a9793,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ingress-addon-legacy-474162,Uid:c9f0b06ebe99090d9bfd5321d5239fcf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694044394577556182,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f0b06ebe99090d9bfd5321d5239fcf,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.53:8443,kubernetes.io/config.hash: c9f0b06ebe99090d9bfd5321d5239fcf,kubernetes.
io/config.seen: 2023-09-06T23:53:13.863517775Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=16d407bc-1826-48a9-a8de-daa2c5f92d5d name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.003391022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=10f1ba58-80f8-4b44-9b36-bf873a310fb2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.003472766Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=10f1ba58-80f8-4b44-9b36-bf873a310fb2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.003997457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95c9cb0cfb863ac493ff99f1aec1135cfb9c586829af69e93859f686dc330670,PodSandboxId:b42f8e8dbe4b3e31d2bd9720b5c9d82d5ae25c16470d52af765b7a43621a3505,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694044598026799068,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-d4vlz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f31bec6-6caa-429c-a2d0-2b5064520a8e,},Annotations:map[string]string{io.kubernetes.container.hash: 40bb5ae8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b839a383e557a72574dd789f8233aa27eb068d6c042dc352e0e1ea09305204,PodSandboxId:7ebdbb807711b50a5181da28941f381f2355799345fc92f823b1be158ee6a3a8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694044457540525662,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cae909fa-18da-4683-b981-bfee5420863a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d408c52c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e7903bf4eb16e365a2ec65a3e0485da865ec925bb3bbb39da495366f5c104f,PodSandboxId:f497bb1f4e425f73592e126de56bccf5f5cd687e567dcb5741c7cd4fa37dd8d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694044440030075260,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-pt5wl,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 69318db7-18dd-4988-b5e1-bd9934902b07,},Annotations:map[string]string{io.kubernetes.container.hash: 5128d650,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8f2f0d2cdcf9fcc20d304f784659d1da19499998dc1e109087b4b04750d179d1,PodSandboxId:a06a58c784bccb3f0bd4eb22ac78e4d5c810b71512a224aeba3312fca6b5ae3a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044430755701950,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hjgdk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34e4f9f1-54a3-4936-a2ab-acc954a1861b,},Annotations:map[string]string{io.kubernetes.container.hash: d7365f63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2522f5e8a2c6e9e6011ec0ba3e8d7bcedbf04c4fd39bdbc89628d2c6687887e4,PodSandboxId:01569997aba2c49c8bdb8ab812cfcef8ee0b75a28096ed91e974f590e5a6335c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044429486368343,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n82hc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6ceafa78-32bc-4e97-8f2c-b27b2c624846,},Annotations:map[string]string{io.kubernetes.container.hash: b055543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017ff28f9887bdcbf047aa62807d3c23ae52eb9faa8746c325810575153e6b7e,PodSandboxId:64370c66223cab4f86c8f2598639650897d708b8f08c2ee5bc8e0b33648943c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694044422029245734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-89cxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a812a2a-1318-42f1-be9e-f78873a7c88d,},Annotations:map[string]string{io.kubernetes.container.hash: ae1635b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34b4915e994b2c9a74012cd1c17
3617d8b0a08bfa07ab728e8c95e59cb982dd,PodSandboxId:71d5e113598096d48d18c0821249e0ffe6c219255298e0420a450e45536ab213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694044421141422521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661f5565-e9d0-4d7c-99f2-10a8c2ea3d02,},Annotations:map[string]string{io.kubernetes.container.hash: ff1649e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a57e794c1fe7c78b410a98d8a5618
e7faef9a5bc7a4783092527d5e7aa62f0e6,PodSandboxId:2cea3f6e09783f323be08dd5c591f1984cb8a83f20209e5dd872cbc80cacb149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694044419465554564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5s52l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5166204-0e90-479a-9135-a20313f9af9a,},Annotations:map[string]string{io.kubernetes.container.hash: da2b0be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bebdff4ca18697cd265665e690a2b7c42d4c115df6bd70e1b7f0f5bdb665d8,PodS
andboxId:1cec9aa1b2ac9da7b5b6fcf5575f9331fc1e1b7d5132247e9d5c784de6f0bec8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694044396339419978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab339f514cba0315da55d49dd2a75764,},Annotations:map[string]string{io.kubernetes.container.hash: fb0e2a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73674c50ee793828c1feddd9338b2524e01d0709f52665356806693ee313a0f,PodSandboxId:30c5f3d28fbba26503a44f1f04c3372ecb504
fe90ad91012cab8876fe601c5ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694044395619743994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39361a5a71bc01c13b5b5c9019589961977ed794c976936e6baad6ee86f2226c,PodSandboxId:6fed6692c8d9fddd7f4ebfb563eefff39d54665e316
b87afc62a00d38d0a9793,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694044395092018894,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f0b06ebe99090d9bfd5321d5239fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 8a4558f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d352b6b2a4243c31cebe92c8c761469f1641a756640677dccab21195d758a597,PodSandboxId:6201b23ee6608f4d3520dcf72bd73e9c897936a630890a388
e52de8dbf80444c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694044395109250967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=10f1ba58-80f8-4b44-9b36-bf873a310fb2 name=/runtime.v1alpha2.Runti
meService/ListContainers
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.032312824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f4345965-1fe6-4b98-b5d9-3599fe42090f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.032407347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f4345965-1fe6-4b98-b5d9-3599fe42090f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.032729022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95c9cb0cfb863ac493ff99f1aec1135cfb9c586829af69e93859f686dc330670,PodSandboxId:b42f8e8dbe4b3e31d2bd9720b5c9d82d5ae25c16470d52af765b7a43621a3505,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694044598026799068,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-d4vlz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f31bec6-6caa-429c-a2d0-2b5064520a8e,},Annotations:map[string]string{io.kubernetes.container.hash: 40bb5ae8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b839a383e557a72574dd789f8233aa27eb068d6c042dc352e0e1ea09305204,PodSandboxId:7ebdbb807711b50a5181da28941f381f2355799345fc92f823b1be158ee6a3a8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694044457540525662,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cae909fa-18da-4683-b981-bfee5420863a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d408c52c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e7903bf4eb16e365a2ec65a3e0485da865ec925bb3bbb39da495366f5c104f,PodSandboxId:f497bb1f4e425f73592e126de56bccf5f5cd687e567dcb5741c7cd4fa37dd8d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694044440030075260,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-pt5wl,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 69318db7-18dd-4988-b5e1-bd9934902b07,},Annotations:map[string]string{io.kubernetes.container.hash: 5128d650,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8f2f0d2cdcf9fcc20d304f784659d1da19499998dc1e109087b4b04750d179d1,PodSandboxId:a06a58c784bccb3f0bd4eb22ac78e4d5c810b71512a224aeba3312fca6b5ae3a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044430755701950,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hjgdk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34e4f9f1-54a3-4936-a2ab-acc954a1861b,},Annotations:map[string]string{io.kubernetes.container.hash: d7365f63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2522f5e8a2c6e9e6011ec0ba3e8d7bcedbf04c4fd39bdbc89628d2c6687887e4,PodSandboxId:01569997aba2c49c8bdb8ab812cfcef8ee0b75a28096ed91e974f590e5a6335c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044429486368343,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n82hc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6ceafa78-32bc-4e97-8f2c-b27b2c624846,},Annotations:map[string]string{io.kubernetes.container.hash: b055543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017ff28f9887bdcbf047aa62807d3c23ae52eb9faa8746c325810575153e6b7e,PodSandboxId:64370c66223cab4f86c8f2598639650897d708b8f08c2ee5bc8e0b33648943c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694044422029245734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-89cxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a812a2a-1318-42f1-be9e-f78873a7c88d,},Annotations:map[string]string{io.kubernetes.container.hash: ae1635b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34b4915e994b2c9a74012cd1c17
3617d8b0a08bfa07ab728e8c95e59cb982dd,PodSandboxId:71d5e113598096d48d18c0821249e0ffe6c219255298e0420a450e45536ab213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694044421141422521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661f5565-e9d0-4d7c-99f2-10a8c2ea3d02,},Annotations:map[string]string{io.kubernetes.container.hash: ff1649e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a57e794c1fe7c78b410a98d8a5618
e7faef9a5bc7a4783092527d5e7aa62f0e6,PodSandboxId:2cea3f6e09783f323be08dd5c591f1984cb8a83f20209e5dd872cbc80cacb149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694044419465554564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5s52l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5166204-0e90-479a-9135-a20313f9af9a,},Annotations:map[string]string{io.kubernetes.container.hash: da2b0be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bebdff4ca18697cd265665e690a2b7c42d4c115df6bd70e1b7f0f5bdb665d8,PodS
andboxId:1cec9aa1b2ac9da7b5b6fcf5575f9331fc1e1b7d5132247e9d5c784de6f0bec8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694044396339419978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab339f514cba0315da55d49dd2a75764,},Annotations:map[string]string{io.kubernetes.container.hash: fb0e2a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73674c50ee793828c1feddd9338b2524e01d0709f52665356806693ee313a0f,PodSandboxId:30c5f3d28fbba26503a44f1f04c3372ecb504
fe90ad91012cab8876fe601c5ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694044395619743994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39361a5a71bc01c13b5b5c9019589961977ed794c976936e6baad6ee86f2226c,PodSandboxId:6fed6692c8d9fddd7f4ebfb563eefff39d54665e316
b87afc62a00d38d0a9793,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694044395092018894,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f0b06ebe99090d9bfd5321d5239fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 8a4558f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d352b6b2a4243c31cebe92c8c761469f1641a756640677dccab21195d758a597,PodSandboxId:6201b23ee6608f4d3520dcf72bd73e9c897936a630890a388
e52de8dbf80444c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694044395109250967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f4345965-1fe6-4b98-b5d9-3599fe42090f name=/runtime.v1alpha2.Runti
meService/ListContainers
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.068293948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=de4f7371-ef76-4752-98e2-1a79cb732721 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.068391996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=de4f7371-ef76-4752-98e2-1a79cb732721 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.068720822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95c9cb0cfb863ac493ff99f1aec1135cfb9c586829af69e93859f686dc330670,PodSandboxId:b42f8e8dbe4b3e31d2bd9720b5c9d82d5ae25c16470d52af765b7a43621a3505,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694044598026799068,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-d4vlz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f31bec6-6caa-429c-a2d0-2b5064520a8e,},Annotations:map[string]string{io.kubernetes.container.hash: 40bb5ae8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b839a383e557a72574dd789f8233aa27eb068d6c042dc352e0e1ea09305204,PodSandboxId:7ebdbb807711b50a5181da28941f381f2355799345fc92f823b1be158ee6a3a8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694044457540525662,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cae909fa-18da-4683-b981-bfee5420863a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d408c52c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e7903bf4eb16e365a2ec65a3e0485da865ec925bb3bbb39da495366f5c104f,PodSandboxId:f497bb1f4e425f73592e126de56bccf5f5cd687e567dcb5741c7cd4fa37dd8d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694044440030075260,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-pt5wl,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 69318db7-18dd-4988-b5e1-bd9934902b07,},Annotations:map[string]string{io.kubernetes.container.hash: 5128d650,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8f2f0d2cdcf9fcc20d304f784659d1da19499998dc1e109087b4b04750d179d1,PodSandboxId:a06a58c784bccb3f0bd4eb22ac78e4d5c810b71512a224aeba3312fca6b5ae3a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044430755701950,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hjgdk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34e4f9f1-54a3-4936-a2ab-acc954a1861b,},Annotations:map[string]string{io.kubernetes.container.hash: d7365f63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2522f5e8a2c6e9e6011ec0ba3e8d7bcedbf04c4fd39bdbc89628d2c6687887e4,PodSandboxId:01569997aba2c49c8bdb8ab812cfcef8ee0b75a28096ed91e974f590e5a6335c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044429486368343,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n82hc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6ceafa78-32bc-4e97-8f2c-b27b2c624846,},Annotations:map[string]string{io.kubernetes.container.hash: b055543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017ff28f9887bdcbf047aa62807d3c23ae52eb9faa8746c325810575153e6b7e,PodSandboxId:64370c66223cab4f86c8f2598639650897d708b8f08c2ee5bc8e0b33648943c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694044422029245734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-89cxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a812a2a-1318-42f1-be9e-f78873a7c88d,},Annotations:map[string]string{io.kubernetes.container.hash: ae1635b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34b4915e994b2c9a74012cd1c17
3617d8b0a08bfa07ab728e8c95e59cb982dd,PodSandboxId:71d5e113598096d48d18c0821249e0ffe6c219255298e0420a450e45536ab213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694044421141422521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661f5565-e9d0-4d7c-99f2-10a8c2ea3d02,},Annotations:map[string]string{io.kubernetes.container.hash: ff1649e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a57e794c1fe7c78b410a98d8a5618
e7faef9a5bc7a4783092527d5e7aa62f0e6,PodSandboxId:2cea3f6e09783f323be08dd5c591f1984cb8a83f20209e5dd872cbc80cacb149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694044419465554564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5s52l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5166204-0e90-479a-9135-a20313f9af9a,},Annotations:map[string]string{io.kubernetes.container.hash: da2b0be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bebdff4ca18697cd265665e690a2b7c42d4c115df6bd70e1b7f0f5bdb665d8,PodS
andboxId:1cec9aa1b2ac9da7b5b6fcf5575f9331fc1e1b7d5132247e9d5c784de6f0bec8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694044396339419978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab339f514cba0315da55d49dd2a75764,},Annotations:map[string]string{io.kubernetes.container.hash: fb0e2a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73674c50ee793828c1feddd9338b2524e01d0709f52665356806693ee313a0f,PodSandboxId:30c5f3d28fbba26503a44f1f04c3372ecb504
fe90ad91012cab8876fe601c5ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694044395619743994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39361a5a71bc01c13b5b5c9019589961977ed794c976936e6baad6ee86f2226c,PodSandboxId:6fed6692c8d9fddd7f4ebfb563eefff39d54665e316
b87afc62a00d38d0a9793,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694044395092018894,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f0b06ebe99090d9bfd5321d5239fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 8a4558f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d352b6b2a4243c31cebe92c8c761469f1641a756640677dccab21195d758a597,PodSandboxId:6201b23ee6608f4d3520dcf72bd73e9c897936a630890a388
e52de8dbf80444c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694044395109250967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=de4f7371-ef76-4752-98e2-1a79cb732721 name=/runtime.v1alpha2.Runti
meService/ListContainers
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.103314457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=62420b4b-44aa-4e3f-99ab-cbfab5717147 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.103422040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=62420b4b-44aa-4e3f-99ab-cbfab5717147 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 06 23:56:52 ingress-addon-legacy-474162 crio[716]: time="2023-09-06 23:56:52.103833336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95c9cb0cfb863ac493ff99f1aec1135cfb9c586829af69e93859f686dc330670,PodSandboxId:b42f8e8dbe4b3e31d2bd9720b5c9d82d5ae25c16470d52af765b7a43621a3505,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694044598026799068,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-d4vlz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f31bec6-6caa-429c-a2d0-2b5064520a8e,},Annotations:map[string]string{io.kubernetes.container.hash: 40bb5ae8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b839a383e557a72574dd789f8233aa27eb068d6c042dc352e0e1ea09305204,PodSandboxId:7ebdbb807711b50a5181da28941f381f2355799345fc92f823b1be158ee6a3a8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694044457540525662,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cae909fa-18da-4683-b981-bfee5420863a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d408c52c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e7903bf4eb16e365a2ec65a3e0485da865ec925bb3bbb39da495366f5c104f,PodSandboxId:f497bb1f4e425f73592e126de56bccf5f5cd687e567dcb5741c7cd4fa37dd8d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694044440030075260,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-pt5wl,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 69318db7-18dd-4988-b5e1-bd9934902b07,},Annotations:map[string]string{io.kubernetes.container.hash: 5128d650,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8f2f0d2cdcf9fcc20d304f784659d1da19499998dc1e109087b4b04750d179d1,PodSandboxId:a06a58c784bccb3f0bd4eb22ac78e4d5c810b71512a224aeba3312fca6b5ae3a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044430755701950,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hjgdk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34e4f9f1-54a3-4936-a2ab-acc954a1861b,},Annotations:map[string]string{io.kubernetes.container.hash: d7365f63,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2522f5e8a2c6e9e6011ec0ba3e8d7bcedbf04c4fd39bdbc89628d2c6687887e4,PodSandboxId:01569997aba2c49c8bdb8ab812cfcef8ee0b75a28096ed91e974f590e5a6335c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694044429486368343,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n82hc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6ceafa78-32bc-4e97-8f2c-b27b2c624846,},Annotations:map[string]string{io.kubernetes.container.hash: b055543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017ff28f9887bdcbf047aa62807d3c23ae52eb9faa8746c325810575153e6b7e,PodSandboxId:64370c66223cab4f86c8f2598639650897d708b8f08c2ee5bc8e0b33648943c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694044422029245734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-89cxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a812a2a-1318-42f1-be9e-f78873a7c88d,},Annotations:map[string]string{io.kubernetes.container.hash: ae1635b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34b4915e994b2c9a74012cd1c17
3617d8b0a08bfa07ab728e8c95e59cb982dd,PodSandboxId:71d5e113598096d48d18c0821249e0ffe6c219255298e0420a450e45536ab213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694044421141422521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661f5565-e9d0-4d7c-99f2-10a8c2ea3d02,},Annotations:map[string]string{io.kubernetes.container.hash: ff1649e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a57e794c1fe7c78b410a98d8a5618
e7faef9a5bc7a4783092527d5e7aa62f0e6,PodSandboxId:2cea3f6e09783f323be08dd5c591f1984cb8a83f20209e5dd872cbc80cacb149,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694044419465554564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5s52l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5166204-0e90-479a-9135-a20313f9af9a,},Annotations:map[string]string{io.kubernetes.container.hash: da2b0be1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bebdff4ca18697cd265665e690a2b7c42d4c115df6bd70e1b7f0f5bdb665d8,PodS
andboxId:1cec9aa1b2ac9da7b5b6fcf5575f9331fc1e1b7d5132247e9d5c784de6f0bec8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694044396339419978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab339f514cba0315da55d49dd2a75764,},Annotations:map[string]string{io.kubernetes.container.hash: fb0e2a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73674c50ee793828c1feddd9338b2524e01d0709f52665356806693ee313a0f,PodSandboxId:30c5f3d28fbba26503a44f1f04c3372ecb504
fe90ad91012cab8876fe601c5ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694044395619743994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39361a5a71bc01c13b5b5c9019589961977ed794c976936e6baad6ee86f2226c,PodSandboxId:6fed6692c8d9fddd7f4ebfb563eefff39d54665e316
b87afc62a00d38d0a9793,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694044395092018894,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f0b06ebe99090d9bfd5321d5239fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 8a4558f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d352b6b2a4243c31cebe92c8c761469f1641a756640677dccab21195d758a597,PodSandboxId:6201b23ee6608f4d3520dcf72bd73e9c897936a630890a388
e52de8dbf80444c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694044395109250967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-474162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=62420b4b-44aa-4e3f-99ab-cbfab5717147 name=/runtime.v1alpha2.Runti
meService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	95c9cb0cfb863       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb            14 seconds ago      Running             hello-world-app           0                   b42f8e8dbe4b3
	65b839a383e55       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                    2 minutes ago       Running             nginx                     0                   7ebdbb807711b
	66e7903bf4eb1       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   f497bb1f4e425
	8f2f0d2cdcf9f       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   a06a58c784bcc
	2522f5e8a2c6e       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   01569997aba2c
	017ff28f9887b       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   64370c66223ca
	b34b4915e994b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   71d5e11359809
	a57e794c1fe7c       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   2cea3f6e09783
	01bebdff4ca18       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   1cec9aa1b2ac9
	e73674c50ee79       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   30c5f3d28fbba
	d352b6b2a4243       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   6201b23ee6608
	39361a5a71bc0       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   6fed6692c8d9f
	
	* 
	* ==> coredns [017ff28f9887bdcbf047aa62807d3c23ae52eb9faa8746c325810575153e6b7e] <==
	* [INFO] 10.244.0.5:53476 - 50352 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000101241s
	[INFO] 10.244.0.5:33270 - 24242 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000085194s
	[INFO] 10.244.0.5:53476 - 12345 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000610331s
	[INFO] 10.244.0.5:53476 - 5405 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.003006382s
	[INFO] 10.244.0.5:33270 - 3423 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000143651s
	[INFO] 10.244.0.5:33270 - 21125 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000186548s
	[INFO] 10.244.0.5:53476 - 3824 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000289484s
	[INFO] 10.244.0.5:53476 - 2774 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000926292s
	[INFO] 10.244.0.5:33270 - 39259 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000372029s
	[INFO] 10.244.0.5:33270 - 26337 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0002218s
	[INFO] 10.244.0.5:33270 - 24303 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000195634s
	[INFO] 10.244.0.5:43294 - 31924 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000092528s
	[INFO] 10.244.0.5:51626 - 50210 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000128383s
	[INFO] 10.244.0.5:43294 - 396 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053438s
	[INFO] 10.244.0.5:43294 - 4875 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000653s
	[INFO] 10.244.0.5:43294 - 3631 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037269s
	[INFO] 10.244.0.5:43294 - 37188 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037915s
	[INFO] 10.244.0.5:51626 - 62694 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000062858s
	[INFO] 10.244.0.5:43294 - 64836 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040326s
	[INFO] 10.244.0.5:43294 - 38884 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071543s
	[INFO] 10.244.0.5:51626 - 55132 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000081311s
	[INFO] 10.244.0.5:51626 - 11587 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070026s
	[INFO] 10.244.0.5:51626 - 53637 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037057s
	[INFO] 10.244.0.5:51626 - 14732 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000082025s
	[INFO] 10.244.0.5:51626 - 47859 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036009s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-474162
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-474162
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=ingress-addon-legacy-474162
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T23_53_23_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 23:53:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-474162
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 23:56:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 23:54:33 +0000   Wed, 06 Sep 2023 23:53:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 23:54:33 +0000   Wed, 06 Sep 2023 23:53:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 23:54:33 +0000   Wed, 06 Sep 2023 23:53:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 23:54:33 +0000   Wed, 06 Sep 2023 23:53:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ingress-addon-legacy-474162
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 df26ff05eb32494a9d411caea2b609f4
	  System UUID:                df26ff05-eb32-494a-9d41-1caea2b609f4
	  Boot ID:                    98716215-4ff3-43f6-8a93-1a167c6b7b6b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-d4vlz                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 coredns-66bff467f8-89cxc                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m14s
	  kube-system                 etcd-ingress-addon-legacy-474162                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-apiserver-ingress-addon-legacy-474162             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-474162    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-proxy-5s52l                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kube-scheduler-ingress-addon-legacy-474162             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m29s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m29s  kubelet     Node ingress-addon-legacy-474162 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m29s  kubelet     Node ingress-addon-legacy-474162 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m29s  kubelet     Node ingress-addon-legacy-474162 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m29s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m19s  kubelet     Node ingress-addon-legacy-474162 status is now: NodeReady
	  Normal  Starting                 3m13s  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep 6 23:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.095863] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.364142] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.440525] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142079] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.324505] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 6 23:53] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.125793] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.148082] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.117791] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.218930] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[  +8.323907] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +2.802962] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.445767] systemd-fstab-generator[1420]: Ignoring "noauto" for root device
	[ +16.657011] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.148414] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.059106] kauditd_printk_skb: 4 callbacks suppressed
	[Sep 6 23:54] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.720107] kauditd_printk_skb: 3 callbacks suppressed
	[Sep 6 23:56] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [01bebdff4ca18697cd265665e690a2b7c42d4c115df6bd70e1b7f0f5bdb665d8] <==
	* 2023-09-06 23:53:16.496573 W | auth: simple token is not cryptographically signed
	2023-09-06 23:53:16.501810 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-06 23:53:16.506827 I | etcdserver: 8389b8f6c4f004d4 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-06 23:53:16.508288 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-06 23:53:16.508470 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-06 23:53:16.508573 I | embed: listening for peers on 192.168.39.53:2380
	raft2023/09/06 23:53:16 INFO: 8389b8f6c4f004d4 switched to configuration voters=(9478310260783449300)
	2023-09-06 23:53:16.509032 I | etcdserver/membership: added member 8389b8f6c4f004d4 [https://192.168.39.53:2380] to cluster 1138cde6dcc1ce27
	raft2023/09/06 23:53:16 INFO: 8389b8f6c4f004d4 is starting a new election at term 1
	raft2023/09/06 23:53:16 INFO: 8389b8f6c4f004d4 became candidate at term 2
	raft2023/09/06 23:53:16 INFO: 8389b8f6c4f004d4 received MsgVoteResp from 8389b8f6c4f004d4 at term 2
	raft2023/09/06 23:53:16 INFO: 8389b8f6c4f004d4 became leader at term 2
	raft2023/09/06 23:53:16 INFO: raft.node: 8389b8f6c4f004d4 elected leader 8389b8f6c4f004d4 at term 2
	2023-09-06 23:53:16.889207 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-06 23:53:16.890786 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-06 23:53:16.890832 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-06 23:53:16.890852 I | etcdserver: published {Name:ingress-addon-legacy-474162 ClientURLs:[https://192.168.39.53:2379]} to cluster 1138cde6dcc1ce27
	2023-09-06 23:53:16.890872 I | embed: ready to serve client requests
	2023-09-06 23:53:16.891120 I | embed: ready to serve client requests
	2023-09-06 23:53:16.892451 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-06 23:53:16.893508 I | embed: serving client requests on 192.168.39.53:2379
	2023-09-06 23:53:38.803992 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:577" took too long (520.795097ms) to execute
	2023-09-06 23:53:38.804424 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3894" took too long (473.971495ms) to execute
	2023-09-06 23:53:38.804532 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (213.617271ms) to execute
	2023-09-06 23:53:38.804660 W | etcdserver: read-only range request "key:\"/registry/storageclasses/standard\" " with result "range_response_count:0 size:5" took too long (283.330666ms) to execute
	
	* 
	* ==> kernel <==
	*  23:56:52 up 4 min,  0 users,  load average: 1.80, 0.56, 0.21
	Linux ingress-addon-legacy-474162 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [39361a5a71bc01c13b5b5c9019589961977ed794c976936e6baad6ee86f2226c] <==
	* Trace[743481832]: [607.555584ms] [516.939177ms] Transaction committed
	I0906 23:53:38.806737       1 trace.go:116] Trace[1983383137]: "GuaranteedUpdate etcd3" type:*apps.DaemonSet (started: 2023-09-06 23:53:38.272723962 +0000 UTC m=+22.983344976) (total time: 533.999387ms):
	Trace[1983383137]: [533.950502ms] [532.804057ms] Transaction committed
	I0906 23:53:38.806856       1 trace.go:116] Trace[134446543]: "Update" url:/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy/status,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:daemon-set-controller,client:192.168.39.53 (started: 2023-09-06 23:53:38.272546295 +0000 UTC m=+22.983167300) (total time: 534.293942ms):
	Trace[134446543]: [534.234496ms] [534.11312ms] Object stored in database
	I0906 23:53:38.807016       1 trace.go:116] Trace[146430707]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2023-09-06 23:53:38.152777485 +0000 UTC m=+22.863398528) (total time: 654.229176ms):
	Trace[146430707]: [654.2136ms] [529.590908ms] Transaction committed
	I0906 23:53:38.807099       1 trace.go:116] Trace[18393122]: "Patch" url:/api/v1/namespaces/kube-system/events/coredns-66bff467f8-89cxc.1782746c3dd4ca98,user-agent:kube-scheduler/v1.18.20 (linux/amd64) kubernetes/1f3e19b/scheduler,client:192.168.39.53 (started: 2023-09-06 23:53:38.152717127 +0000 UTC m=+22.863338176) (total time: 654.367973ms):
	Trace[18393122]: [115.482534ms] [115.455256ms] About to apply patch
	Trace[18393122]: [654.336187ms] [534.928689ms] Object stored in database
	I0906 23:53:38.807565       1 trace.go:116] Trace[1592137267]: "GuaranteedUpdate etcd3" type:*apps.Deployment (started: 2023-09-06 23:53:38.30520659 +0000 UTC m=+23.015827607) (total time: 502.345199ms):
	Trace[1592137267]: [502.345199ms] [502.201407ms] END
	I0906 23:53:38.819426       1 trace.go:116] Trace[1246503849]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:deployment-controller,client:192.168.39.53 (started: 2023-09-06 23:53:38.304599785 +0000 UTC m=+23.015220797) (total time: 514.807877ms):
	Trace[1246503849]: [514.807877ms] [514.42277ms] END
	I0906 23:53:38.809029       1 trace.go:116] Trace[873956204]: "Get" url:/api/v1/namespaces/kube-system/configmaps/coredns,user-agent:kubectl/v1.18.20 (linux/amd64) kubernetes/1f3e19b,client:127.0.0.1 (started: 2023-09-06 23:53:38.259953624 +0000 UTC m=+22.970574634) (total time: 549.051599ms):
	Trace[873956204]: [549.010981ms] [549.001174ms] About to write a response
	I0906 23:53:38.809444       1 trace.go:116] Trace[65883324]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2023-09-06 23:53:38.298002154 +0000 UTC m=+23.008623166) (total time: 511.428745ms):
	Trace[65883324]: [511.228065ms] [508.624481ms] Transaction committed
	I0906 23:53:38.820272       1 trace.go:116] Trace[1484792274]: "Create" url:/api/v1/namespaces/kube-system/pods/kube-proxy-5s52l/binding,user-agent:kube-scheduler/v1.18.20 (linux/amd64) kubernetes/1f3e19b/scheduler,client:192.168.39.53 (started: 2023-09-06 23:53:38.2974576 +0000 UTC m=+23.008078598) (total time: 522.790472ms):
	Trace[1484792274]: [522.770364ms] [522.632775ms] Object stored in database
	I0906 23:53:38.810644       1 trace.go:116] Trace[1793964076]: "Patch" url:/api/v1/nodes/ingress-addon-legacy-474162,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:node-controller,client:192.168.39.53 (started: 2023-09-06 23:53:38.198731291 +0000 UTC m=+22.909352283) (total time: 611.891031ms):
	Trace[1793964076]: [607.775844ms] [583.864337ms] Object stored in database
	I0906 23:53:44.784841       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0906 23:54:10.953289       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0906 23:56:44.714231       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [d352b6b2a4243c31cebe92c8c761469f1641a756640677dccab21195d758a597] <==
	* I0906 23:53:38.083366       1 disruption.go:339] Sending events to api server.
	I0906 23:53:38.102756       1 shared_informer.go:230] Caches are synced for taint 
	I0906 23:53:38.102860       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0906 23:53:38.102918       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-474162. Assuming now as a timestamp.
	I0906 23:53:38.102946       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0906 23:53:38.121411       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0906 23:53:38.131901       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-474162", UID:"04b6d19d-4531-4c27-9578-0a53746f39bd", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-474162 event: Registered Node ingress-addon-legacy-474162 in Controller
	I0906 23:53:38.204033       1 shared_informer.go:230] Caches are synced for resource quota 
	I0906 23:53:38.220774       1 shared_informer.go:230] Caches are synced for resource quota 
	I0906 23:53:38.233472       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0906 23:53:38.246252       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"34b5225d-46c8-4b82-9960-09bc2c972763", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-5s52l
	I0906 23:53:38.261444       1 shared_informer.go:230] Caches are synced for attach detach 
	I0906 23:53:38.284220       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0906 23:53:38.322076       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0906 23:53:38.322279       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0906 23:53:38.861667       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"34b5225d-46c8-4b82-9960-09bc2c972763", ResourceVersion:"212", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63829641203, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e1d060), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc000e1d080)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000e1d0a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00041c4c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc000e1d0c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e1d0e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e1d160)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000df3130), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000bc6f98), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000897f10), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0012a82e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000bc6ff8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0906 23:53:44.784197       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"750e09cd-b823-45d0-a3a9-56299a0f88e0", APIVersion:"apps/v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0906 23:53:44.817127       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"c3e54b09-c8aa-47ea-bcf8-ef13817a1487", APIVersion:"apps/v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-pt5wl
	I0906 23:53:44.850899       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"ed22f663-9baf-4da9-ad20-4b448bc07794", APIVersion:"batch/v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-n82hc
	I0906 23:53:44.955552       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"93b2aa87-85dd-4793-8b21-106f8f0b34b4", APIVersion:"batch/v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-hjgdk
	I0906 23:53:49.921381       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"ed22f663-9baf-4da9-ad20-4b448bc07794", APIVersion:"batch/v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0906 23:53:51.932246       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"93b2aa87-85dd-4793-8b21-106f8f0b34b4", APIVersion:"batch/v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0906 23:56:34.023983       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"95d06436-8cf1-4af7-b19d-e5fd7a233606", APIVersion:"apps/v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0906 23:56:34.050578       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"fcebae8f-53d1-4dbd-8f0b-cdd7caff5977", APIVersion:"apps/v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-d4vlz
	
	* 
	* ==> kube-proxy [a57e794c1fe7c78b410a98d8a5618e7faef9a5bc7a4783092527d5e7aa62f0e6] <==
	* W0906 23:53:39.693961       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0906 23:53:39.703674       1 node.go:136] Successfully retrieved node IP: 192.168.39.53
	I0906 23:53:39.703757       1 server_others.go:186] Using iptables Proxier.
	I0906 23:53:39.704008       1 server.go:583] Version: v1.18.20
	I0906 23:53:39.706009       1 config.go:315] Starting service config controller
	I0906 23:53:39.706056       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0906 23:53:39.706088       1 config.go:133] Starting endpoints config controller
	I0906 23:53:39.706119       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0906 23:53:39.807464       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0906 23:53:39.807545       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [e73674c50ee793828c1feddd9338b2524e01d0709f52665356806693ee313a0f] <==
	* I0906 23:53:19.992436       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0906 23:53:19.992480       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0906 23:53:19.993946       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0906 23:53:19.994099       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 23:53:19.994106       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 23:53:19.994115       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0906 23:53:20.001596       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 23:53:20.002249       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 23:53:20.002581       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 23:53:20.002819       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 23:53:20.002878       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 23:53:20.002969       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 23:53:20.003054       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 23:53:20.003194       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 23:53:20.003280       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 23:53:20.003394       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 23:53:20.003447       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 23:53:20.003753       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 23:53:20.902556       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 23:53:20.958757       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 23:53:20.977931       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 23:53:20.987093       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 23:53:21.140721       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0906 23:53:23.794342       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0906 23:53:38.249555       1 factory.go:503] pod: kube-system/coredns-66bff467f8-89cxc is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-09-06 23:52:47 UTC, ends at Wed 2023-09-06 23:56:52 UTC. --
	Sep 06 23:53:52 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:53:52.178229    1427 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-mprb9" (UniqueName: "kubernetes.io/secret/34e4f9f1-54a3-4936-a2ab-acc954a1861b-ingress-nginx-admission-token-mprb9") on node "ingress-addon-legacy-474162" DevicePath ""
	Sep 06 23:53:52 ingress-addon-legacy-474162 kubelet[1427]: W0906 23:53:52.920661    1427 pod_container_deletor.go:77] Container "a06a58c784bccb3f0bd4eb22ac78e4d5c810b71512a224aeba3312fca6b5ae3a" not found in pod's containers
	Sep 06 23:54:02 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:54:02.107352    1427 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 06 23:54:02 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:54:02.115914    1427 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-mrltl" (UniqueName: "kubernetes.io/secret/19f301d2-d9d3-44a6-9d85-b7c9f8cb7302-minikube-ingress-dns-token-mrltl") pod "kube-ingress-dns-minikube" (UID: "19f301d2-d9d3-44a6-9d85-b7c9f8cb7302")
	Sep 06 23:54:11 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:54:11.214206    1427 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 06 23:54:11 ingress-addon-legacy-474162 kubelet[1427]: E0906 23:54:11.232773    1427 reflector.go:178] object-"default"/"default-token-tmmcn": Failed to list *v1.Secret: secrets "default-token-tmmcn" is forbidden: User "system:node:ingress-addon-legacy-474162" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "ingress-addon-legacy-474162" and this object
	Sep 06 23:54:11 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:54:11.347303    1427 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tmmcn" (UniqueName: "kubernetes.io/secret/cae909fa-18da-4683-b981-bfee5420863a-default-token-tmmcn") pod "nginx" (UID: "cae909fa-18da-4683-b981-bfee5420863a")
	Sep 06 23:56:34 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:34.040629    1427 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 06 23:56:34 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:34.228211    1427 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tmmcn" (UniqueName: "kubernetes.io/secret/3f31bec6-6caa-429c-a2d0-2b5064520a8e-default-token-tmmcn") pod "hello-world-app-5f5d8b66bb-d4vlz" (UID: "3f31bec6-6caa-429c-a2d0-2b5064520a8e")
	Sep 06 23:56:35 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:35.939541    1427 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 51b0cd83af0afbad71dd55c94905a229a841c37fa024113493024498750e3018
	Sep 06 23:56:35 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:35.976929    1427 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 51b0cd83af0afbad71dd55c94905a229a841c37fa024113493024498750e3018
	Sep 06 23:56:35 ingress-addon-legacy-474162 kubelet[1427]: E0906 23:56:35.977521    1427 remote_runtime.go:295] ContainerStatus "51b0cd83af0afbad71dd55c94905a229a841c37fa024113493024498750e3018" from runtime service failed: rpc error: code = NotFound desc = could not find container "51b0cd83af0afbad71dd55c94905a229a841c37fa024113493024498750e3018": container with ID starting with 51b0cd83af0afbad71dd55c94905a229a841c37fa024113493024498750e3018 not found: ID does not exist
	Sep 06 23:56:36 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:36.135360    1427 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-mrltl" (UniqueName: "kubernetes.io/secret/19f301d2-d9d3-44a6-9d85-b7c9f8cb7302-minikube-ingress-dns-token-mrltl") pod "19f301d2-d9d3-44a6-9d85-b7c9f8cb7302" (UID: "19f301d2-d9d3-44a6-9d85-b7c9f8cb7302")
	Sep 06 23:56:36 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:36.149737    1427 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19f301d2-d9d3-44a6-9d85-b7c9f8cb7302-minikube-ingress-dns-token-mrltl" (OuterVolumeSpecName: "minikube-ingress-dns-token-mrltl") pod "19f301d2-d9d3-44a6-9d85-b7c9f8cb7302" (UID: "19f301d2-d9d3-44a6-9d85-b7c9f8cb7302"). InnerVolumeSpecName "minikube-ingress-dns-token-mrltl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 23:56:36 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:36.235702    1427 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-mrltl" (UniqueName: "kubernetes.io/secret/19f301d2-d9d3-44a6-9d85-b7c9f8cb7302-minikube-ingress-dns-token-mrltl") on node "ingress-addon-legacy-474162" DevicePath ""
	Sep 06 23:56:44 ingress-addon-legacy-474162 kubelet[1427]: E0906 23:56:44.693682    1427 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-pt5wl.17827497b047e2e5", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-pt5wl", UID:"69318db7-18dd-4988-b5e1-bd9934902b07", APIVersion:"v1", ResourceVersion:"426", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-474162"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc136624f29318ae5, ext:201515446215, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc136624f29318ae5, ext:201515446215, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-pt5wl.17827497b047e2e5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 06 23:56:44 ingress-addon-legacy-474162 kubelet[1427]: E0906 23:56:44.707774    1427 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-pt5wl.17827497b047e2e5", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-pt5wl", UID:"69318db7-18dd-4988-b5e1-bd9934902b07", APIVersion:"v1", ResourceVersion:"426", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-474162"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc136624f29318ae5, ext:201515446215, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc136624f29d97e5a, ext:201526453049, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-pt5wl.17827497b047e2e5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 06 23:56:46 ingress-addon-legacy-474162 kubelet[1427]: W0906 23:56:46.989331    1427 pod_container_deletor.go:77] Container "f497bb1f4e425f73592e126de56bccf5f5cd687e567dcb5741c7cd4fa37dd8d9" not found in pod's containers
	Sep 06 23:56:48 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:48.880284    1427 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/69318db7-18dd-4988-b5e1-bd9934902b07-webhook-cert") pod "69318db7-18dd-4988-b5e1-bd9934902b07" (UID: "69318db7-18dd-4988-b5e1-bd9934902b07")
	Sep 06 23:56:48 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:48.880341    1427 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-sfjsh" (UniqueName: "kubernetes.io/secret/69318db7-18dd-4988-b5e1-bd9934902b07-ingress-nginx-token-sfjsh") pod "69318db7-18dd-4988-b5e1-bd9934902b07" (UID: "69318db7-18dd-4988-b5e1-bd9934902b07")
	Sep 06 23:56:48 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:48.882749    1427 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69318db7-18dd-4988-b5e1-bd9934902b07-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "69318db7-18dd-4988-b5e1-bd9934902b07" (UID: "69318db7-18dd-4988-b5e1-bd9934902b07"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 23:56:48 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:48.885752    1427 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69318db7-18dd-4988-b5e1-bd9934902b07-ingress-nginx-token-sfjsh" (OuterVolumeSpecName: "ingress-nginx-token-sfjsh") pod "69318db7-18dd-4988-b5e1-bd9934902b07" (UID: "69318db7-18dd-4988-b5e1-bd9934902b07"). InnerVolumeSpecName "ingress-nginx-token-sfjsh". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 23:56:48 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:48.980750    1427 reconciler.go:319] Volume detached for volume "ingress-nginx-token-sfjsh" (UniqueName: "kubernetes.io/secret/69318db7-18dd-4988-b5e1-bd9934902b07-ingress-nginx-token-sfjsh") on node "ingress-addon-legacy-474162" DevicePath ""
	Sep 06 23:56:48 ingress-addon-legacy-474162 kubelet[1427]: I0906 23:56:48.980785    1427 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/69318db7-18dd-4988-b5e1-bd9934902b07-webhook-cert") on node "ingress-addon-legacy-474162" DevicePath ""
	Sep 06 23:56:49 ingress-addon-legacy-474162 kubelet[1427]: W0906 23:56:49.794621    1427 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/69318db7-18dd-4988-b5e1-bd9934902b07/volumes" does not exist
	
	* 
	* ==> storage-provisioner [b34b4915e994b2c9a74012cd1c173617d8b0a08bfa07ab728e8c95e59cb982dd] <==
	* I0906 23:53:41.237710       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 23:53:41.246923       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 23:53:41.247008       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 23:53:41.255251       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 23:53:41.255478       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-474162_df2e3b1c-f481-405b-9e66-997b7f538053!
	I0906 23:53:41.261339       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d87f3404-4be4-4178-a504-b74ce607aef5", APIVersion:"v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-474162_df2e3b1c-f481-405b-9e66-997b7f538053 became leader
	I0906 23:53:41.356539       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-474162_df2e3b1c-f481-405b-9e66-997b7f538053!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-474162 -n ingress-addon-legacy-474162
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-474162 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (170.98s)
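
The storage-provisioner log above shows the usual Kubernetes leader-election flow: the process attempts to acquire the kube-system/k8s.io-minikube-hostpath lease, becomes leader, and only then starts its provisioner controller. The provisioner in this image uses its own vendored, Endpoints-based lock (visible in the Event line), but the same flow can be sketched with client-go's Lease lock; the lock name is taken from the log, everything else below is illustrative and not minikube's actual implementation:

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Lease named after the lock in the log: kube-system/k8s.io-minikube-hostpath.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Equivalent to "Starting provisioner controller ..." in the log:
				// the controller runs only while this process holds the lease.
				klog.Info("acquired lease, starting provisioner controller")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				klog.Info("lost lease, stopping")
			},
		},
	})
}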

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
json_output_test.go:114: step 0 has already been assigned to another step:
Stopping node "json-output-375099"  ...
Cannot use for:
Stopping node "json-output-375099"  ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ea5c3679-af91-4239-b94f-8231540e600b
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-375099\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: eb8a621e-bb43-41e4-bc34-282aa8ce8d11
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-375099\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7b437c30-97bc-4595-8248-30a973ed1502
datacontenttype: application/json
Data,
{
"currentstep": "2",
"message": "1 node stopped.",
"name": "Done",
"totalsteps": "2"
}
]
--- FAIL: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
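
The two events with currentstep 0 above are what trips this assertion: each io.k8s.sigs.minikube.step event emitted during a stop is expected to carry a unique currentstep value. A minimal sketch of that kind of distinctness check, assuming the JSON events have already been decoded into a small struct (illustrative only, not the actual json_output_test.go code):

package main

import "fmt"

// stepEvent holds the two fields of interest from an
// io.k8s.sigs.minikube.step cloud event (field names here are illustrative).
type stepEvent struct {
	CurrentStep string
	Message     string
}

// checkDistinctSteps fails as soon as a currentstep value repeats,
// mirroring the "step 0 has already been assigned to another step" error.
func checkDistinctSteps(events []stepEvent) error {
	seen := map[string]string{}
	for _, ev := range events {
		if prev, ok := seen[ev.CurrentStep]; ok {
			return fmt.Errorf("step %s has already been assigned to another step:\n%s\nCannot use for:\n%s",
				ev.CurrentStep, prev, ev.Message)
		}
		seen[ev.CurrentStep] = ev.Message
	}
	return nil
}

func main() {
	// The sequence captured above: 0, 0, 2.
	events := []stepEvent{
		{"0", `Stopping node "json-output-375099"  ...`},
		{"0", `Stopping node "json-output-375099"  ...`},
		{"2", "1 node stopped."},
	}
	fmt.Println(checkDistinctSteps(events))
}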

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ea5c3679-af91-4239-b94f-8231540e600b
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-375099\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: eb8a621e-bb43-41e4-bc34-282aa8ce8d11
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-375099\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7b437c30-97bc-4595-8248-30a973ed1502
datacontenttype: application/json
Data,
{
"currentstep": "2",
"message": "1 node stopped.",
"name": "Done",
"totalsteps": "2"
}
]
--- FAIL: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
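
The same 0, 0, 2 sequence also fails the ordering assertion, because the second event does not advance the step counter. A hedged sketch of an increasing-order check over the decoded currentstep values (again illustrative, not the test's actual implementation):

package main

import (
	"fmt"
	"strconv"
)

// checkIncreasingSteps requires each currentstep to be strictly greater
// than the previous one, which the observed sequence 0, 0, 2 violates.
func checkIncreasingSteps(currentSteps []string) error {
	last := -1
	for _, s := range currentSteps {
		n, err := strconv.Atoi(s)
		if err != nil {
			return fmt.Errorf("currentstep %q is not a number: %v", s, err)
		}
		if n <= last {
			return fmt.Errorf("current step is not in increasing order: %d after %d", n, last)
		}
		last = n
	}
	return nil
}

func main() {
	fmt.Println(checkIncreasingSteps([]string{"0", "0", "2"}))
}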

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- exec busybox-5bc68d56bd-mq552 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- exec busybox-5bc68d56bd-mq552 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-816061 -- exec busybox-5bc68d56bd-mq552 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (171.373766ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-mq552): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- exec busybox-5bc68d56bd-zvzjl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- exec busybox-5bc68d56bd-zvzjl -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-816061 -- exec busybox-5bc68d56bd-zvzjl -- sh -c "ping -c 1 192.168.39.1": exit status 1 (166.383151ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-zvzjl): exit status 1
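
Both pods fail the same way: busybox's ping tries to open a raw ICMP socket, which an unprivileged container (no root, no CAP_NET_RAW) cannot do, hence "permission denied (are you root?)" and exit status 1, even though the preceding nslookup of host.minikube.internal completed. A minimal sketch that reruns the failing step outside the test harness, reusing the command, profile, and pod name recorded in this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test uses; the gateway IP 192.168.39.1 is the
	// mk-multinode-816061 network gateway shown later in the post-mortem log.
	cmd := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "multinode-816061",
		"--", "exec", "busybox-5bc68d56bd-mq552", "--",
		"sh", "-c", "ping -c 1 192.168.39.1")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// Expected here: busybox ping needs root or CAP_NET_RAW for its raw
		// ICMP socket, so the exec exits 1 inside an unprivileged pod.
		fmt.Println("ping from pod failed:", err)
	}
}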
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-816061 -n multinode-816061
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-816061 logs -n 25: (1.305577253s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-643751 ssh -- ls                    | mount-start-2-643751 | jenkins | v1.31.2 | 07 Sep 23 00:02 UTC | 07 Sep 23 00:02 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-643751 ssh --                       | mount-start-2-643751 | jenkins | v1.31.2 | 07 Sep 23 00:02 UTC | 07 Sep 23 00:02 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-643751                           | mount-start-2-643751 | jenkins | v1.31.2 | 07 Sep 23 00:02 UTC | 07 Sep 23 00:02 UTC |
	| start   | -p mount-start-2-643751                           | mount-start-2-643751 | jenkins | v1.31.2 | 07 Sep 23 00:02 UTC | 07 Sep 23 00:03 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-643751 | jenkins | v1.31.2 | 07 Sep 23 00:03 UTC |                     |
	|         | --profile mount-start-2-643751                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-643751 ssh -- ls                    | mount-start-2-643751 | jenkins | v1.31.2 | 07 Sep 23 00:03 UTC | 07 Sep 23 00:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-643751 ssh --                       | mount-start-2-643751 | jenkins | v1.31.2 | 07 Sep 23 00:03 UTC | 07 Sep 23 00:03 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-643751                           | mount-start-2-643751 | jenkins | v1.31.2 | 07 Sep 23 00:03 UTC | 07 Sep 23 00:03 UTC |
	| delete  | -p mount-start-1-624661                           | mount-start-1-624661 | jenkins | v1.31.2 | 07 Sep 23 00:03 UTC | 07 Sep 23 00:03 UTC |
	| start   | -p multinode-816061                               | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:03 UTC | 07 Sep 23 00:05 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- apply -f                   | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- rollout                    | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- get pods -o                | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- get pods -o                | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- exec                       | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | busybox-5bc68d56bd-mq552 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- exec                       | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | busybox-5bc68d56bd-zvzjl --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- exec                       | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | busybox-5bc68d56bd-mq552 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- exec                       | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | busybox-5bc68d56bd-zvzjl --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- exec                       | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | busybox-5bc68d56bd-mq552 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- exec                       | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | busybox-5bc68d56bd-zvzjl -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- get pods -o                | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- exec                       | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | busybox-5bc68d56bd-mq552                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- exec                       | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC |                     |
	|         | busybox-5bc68d56bd-mq552 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- exec                       | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC | 07 Sep 23 00:05 UTC |
	|         | busybox-5bc68d56bd-zvzjl                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-816061 -- exec                       | multinode-816061     | jenkins | v1.31.2 | 07 Sep 23 00:05 UTC |                     |
	|         | busybox-5bc68d56bd-zvzjl -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/07 00:03:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:03:18.661905   26504 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:03:18.662040   26504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:03:18.662049   26504 out.go:309] Setting ErrFile to fd 2...
	I0907 00:03:18.662056   26504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:03:18.662266   26504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:03:18.662883   26504 out.go:303] Setting JSON to false
	I0907 00:03:18.663741   26504 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2743,"bootTime":1694042256,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:03:18.663798   26504 start.go:138] virtualization: kvm guest
	I0907 00:03:18.666154   26504 out.go:177] * [multinode-816061] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:03:18.667754   26504 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:03:18.667791   26504 notify.go:220] Checking for updates...
	I0907 00:03:18.669294   26504 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:03:18.670787   26504 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:03:18.672293   26504 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:03:18.673630   26504 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:03:18.674981   26504 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:03:18.676676   26504 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:03:18.712151   26504 out.go:177] * Using the kvm2 driver based on user configuration
	I0907 00:03:18.713583   26504 start.go:298] selected driver: kvm2
	I0907 00:03:18.713598   26504 start.go:902] validating driver "kvm2" against <nil>
	I0907 00:03:18.713611   26504 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:03:18.714504   26504 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:03:18.714592   26504 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:03:18.729491   26504 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 00:03:18.729532   26504 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0907 00:03:18.729727   26504 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:03:18.729756   26504 cni.go:84] Creating CNI manager for ""
	I0907 00:03:18.729760   26504 cni.go:136] 0 nodes found, recommending kindnet
	I0907 00:03:18.729765   26504 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0907 00:03:18.729774   26504 start_flags.go:321] config:
	{Name:multinode-816061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:03:18.729893   26504 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:03:18.732614   26504 out.go:177] * Starting control plane node multinode-816061 in cluster multinode-816061
	I0907 00:03:18.734190   26504 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:03:18.734219   26504 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0907 00:03:18.734229   26504 cache.go:57] Caching tarball of preloaded images
	I0907 00:03:18.734322   26504 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:03:18.734334   26504 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 00:03:18.734594   26504 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/config.json ...
	I0907 00:03:18.734611   26504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/config.json: {Name:mk892c417445a877963a5bc59c335dc4c23d6f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:03:18.734731   26504 start.go:365] acquiring machines lock for multinode-816061: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:03:18.734758   26504 start.go:369] acquired machines lock for "multinode-816061" in 16.086µs
	I0907 00:03:18.734772   26504 start.go:93] Provisioning new machine with config: &{Name:multinode-816061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:03:18.734879   26504 start.go:125] createHost starting for "" (driver="kvm2")
	I0907 00:03:18.737528   26504 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0907 00:03:18.737692   26504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:03:18.737742   26504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:03:18.751665   26504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I0907 00:03:18.752081   26504 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:03:18.752614   26504 main.go:141] libmachine: Using API Version  1
	I0907 00:03:18.752643   26504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:03:18.752970   26504 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:03:18.753176   26504 main.go:141] libmachine: (multinode-816061) Calling .GetMachineName
	I0907 00:03:18.753373   26504 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:03:18.753592   26504 start.go:159] libmachine.API.Create for "multinode-816061" (driver="kvm2")
	I0907 00:03:18.753617   26504 client.go:168] LocalClient.Create starting
	I0907 00:03:18.753641   26504 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem
	I0907 00:03:18.753672   26504 main.go:141] libmachine: Decoding PEM data...
	I0907 00:03:18.753687   26504 main.go:141] libmachine: Parsing certificate...
	I0907 00:03:18.753735   26504 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem
	I0907 00:03:18.753752   26504 main.go:141] libmachine: Decoding PEM data...
	I0907 00:03:18.753764   26504 main.go:141] libmachine: Parsing certificate...
	I0907 00:03:18.753780   26504 main.go:141] libmachine: Running pre-create checks...
	I0907 00:03:18.753790   26504 main.go:141] libmachine: (multinode-816061) Calling .PreCreateCheck
	I0907 00:03:18.754073   26504 main.go:141] libmachine: (multinode-816061) Calling .GetConfigRaw
	I0907 00:03:18.754555   26504 main.go:141] libmachine: Creating machine...
	I0907 00:03:18.754582   26504 main.go:141] libmachine: (multinode-816061) Calling .Create
	I0907 00:03:18.754720   26504 main.go:141] libmachine: (multinode-816061) Creating KVM machine...
	I0907 00:03:18.755964   26504 main.go:141] libmachine: (multinode-816061) DBG | found existing default KVM network
	I0907 00:03:18.756555   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:18.756435   26527 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000029240}
	I0907 00:03:18.761729   26504 main.go:141] libmachine: (multinode-816061) DBG | trying to create private KVM network mk-multinode-816061 192.168.39.0/24...
	I0907 00:03:18.830054   26504 main.go:141] libmachine: (multinode-816061) Setting up store path in /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061 ...
	I0907 00:03:18.830083   26504 main.go:141] libmachine: (multinode-816061) Building disk image from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0907 00:03:18.830091   26504 main.go:141] libmachine: (multinode-816061) DBG | private KVM network mk-multinode-816061 192.168.39.0/24 created
	I0907 00:03:18.830110   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:18.830035   26527 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:03:18.830142   26504 main.go:141] libmachine: (multinode-816061) Downloading /home/jenkins/minikube-integration/17174-6470/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0907 00:03:19.028487   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:19.028367   26527 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa...
	I0907 00:03:19.218297   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:19.218151   26527 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/multinode-816061.rawdisk...
	I0907 00:03:19.218347   26504 main.go:141] libmachine: (multinode-816061) DBG | Writing magic tar header
	I0907 00:03:19.218362   26504 main.go:141] libmachine: (multinode-816061) DBG | Writing SSH key tar header
	I0907 00:03:19.218372   26504 main.go:141] libmachine: (multinode-816061) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061 (perms=drwx------)
	I0907 00:03:19.218383   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:19.218256   26527 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061 ...
	I0907 00:03:19.218395   26504 main.go:141] libmachine: (multinode-816061) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines (perms=drwxr-xr-x)
	I0907 00:03:19.218412   26504 main.go:141] libmachine: (multinode-816061) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube (perms=drwxr-xr-x)
	I0907 00:03:19.218434   26504 main.go:141] libmachine: (multinode-816061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061
	I0907 00:03:19.218444   26504 main.go:141] libmachine: (multinode-816061) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470 (perms=drwxrwxr-x)
	I0907 00:03:19.218456   26504 main.go:141] libmachine: (multinode-816061) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0907 00:03:19.218463   26504 main.go:141] libmachine: (multinode-816061) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0907 00:03:19.218470   26504 main.go:141] libmachine: (multinode-816061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines
	I0907 00:03:19.218480   26504 main.go:141] libmachine: (multinode-816061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:03:19.218486   26504 main.go:141] libmachine: (multinode-816061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470
	I0907 00:03:19.218493   26504 main.go:141] libmachine: (multinode-816061) Creating domain...
	I0907 00:03:19.218500   26504 main.go:141] libmachine: (multinode-816061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0907 00:03:19.218507   26504 main.go:141] libmachine: (multinode-816061) DBG | Checking permissions on dir: /home/jenkins
	I0907 00:03:19.218514   26504 main.go:141] libmachine: (multinode-816061) DBG | Checking permissions on dir: /home
	I0907 00:03:19.218522   26504 main.go:141] libmachine: (multinode-816061) DBG | Skipping /home - not owner
	I0907 00:03:19.219568   26504 main.go:141] libmachine: (multinode-816061) define libvirt domain using xml: 
	I0907 00:03:19.219591   26504 main.go:141] libmachine: (multinode-816061) <domain type='kvm'>
	I0907 00:03:19.219598   26504 main.go:141] libmachine: (multinode-816061)   <name>multinode-816061</name>
	I0907 00:03:19.219603   26504 main.go:141] libmachine: (multinode-816061)   <memory unit='MiB'>2200</memory>
	I0907 00:03:19.219609   26504 main.go:141] libmachine: (multinode-816061)   <vcpu>2</vcpu>
	I0907 00:03:19.219616   26504 main.go:141] libmachine: (multinode-816061)   <features>
	I0907 00:03:19.219628   26504 main.go:141] libmachine: (multinode-816061)     <acpi/>
	I0907 00:03:19.219638   26504 main.go:141] libmachine: (multinode-816061)     <apic/>
	I0907 00:03:19.219644   26504 main.go:141] libmachine: (multinode-816061)     <pae/>
	I0907 00:03:19.219653   26504 main.go:141] libmachine: (multinode-816061)     
	I0907 00:03:19.219664   26504 main.go:141] libmachine: (multinode-816061)   </features>
	I0907 00:03:19.219670   26504 main.go:141] libmachine: (multinode-816061)   <cpu mode='host-passthrough'>
	I0907 00:03:19.219675   26504 main.go:141] libmachine: (multinode-816061)   
	I0907 00:03:19.219682   26504 main.go:141] libmachine: (multinode-816061)   </cpu>
	I0907 00:03:19.219687   26504 main.go:141] libmachine: (multinode-816061)   <os>
	I0907 00:03:19.219695   26504 main.go:141] libmachine: (multinode-816061)     <type>hvm</type>
	I0907 00:03:19.219701   26504 main.go:141] libmachine: (multinode-816061)     <boot dev='cdrom'/>
	I0907 00:03:19.219708   26504 main.go:141] libmachine: (multinode-816061)     <boot dev='hd'/>
	I0907 00:03:19.219734   26504 main.go:141] libmachine: (multinode-816061)     <bootmenu enable='no'/>
	I0907 00:03:19.219761   26504 main.go:141] libmachine: (multinode-816061)   </os>
	I0907 00:03:19.219775   26504 main.go:141] libmachine: (multinode-816061)   <devices>
	I0907 00:03:19.219790   26504 main.go:141] libmachine: (multinode-816061)     <disk type='file' device='cdrom'>
	I0907 00:03:19.219810   26504 main.go:141] libmachine: (multinode-816061)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/boot2docker.iso'/>
	I0907 00:03:19.219819   26504 main.go:141] libmachine: (multinode-816061)       <target dev='hdc' bus='scsi'/>
	I0907 00:03:19.219831   26504 main.go:141] libmachine: (multinode-816061)       <readonly/>
	I0907 00:03:19.219844   26504 main.go:141] libmachine: (multinode-816061)     </disk>
	I0907 00:03:19.219859   26504 main.go:141] libmachine: (multinode-816061)     <disk type='file' device='disk'>
	I0907 00:03:19.219877   26504 main.go:141] libmachine: (multinode-816061)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0907 00:03:19.219897   26504 main.go:141] libmachine: (multinode-816061)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/multinode-816061.rawdisk'/>
	I0907 00:03:19.219906   26504 main.go:141] libmachine: (multinode-816061)       <target dev='hda' bus='virtio'/>
	I0907 00:03:19.219918   26504 main.go:141] libmachine: (multinode-816061)     </disk>
	I0907 00:03:19.219931   26504 main.go:141] libmachine: (multinode-816061)     <interface type='network'>
	I0907 00:03:19.219948   26504 main.go:141] libmachine: (multinode-816061)       <source network='mk-multinode-816061'/>
	I0907 00:03:19.219964   26504 main.go:141] libmachine: (multinode-816061)       <model type='virtio'/>
	I0907 00:03:19.219977   26504 main.go:141] libmachine: (multinode-816061)     </interface>
	I0907 00:03:19.219989   26504 main.go:141] libmachine: (multinode-816061)     <interface type='network'>
	I0907 00:03:19.219999   26504 main.go:141] libmachine: (multinode-816061)       <source network='default'/>
	I0907 00:03:19.220012   26504 main.go:141] libmachine: (multinode-816061)       <model type='virtio'/>
	I0907 00:03:19.220026   26504 main.go:141] libmachine: (multinode-816061)     </interface>
	I0907 00:03:19.220041   26504 main.go:141] libmachine: (multinode-816061)     <serial type='pty'>
	I0907 00:03:19.220055   26504 main.go:141] libmachine: (multinode-816061)       <target port='0'/>
	I0907 00:03:19.220066   26504 main.go:141] libmachine: (multinode-816061)     </serial>
	I0907 00:03:19.220078   26504 main.go:141] libmachine: (multinode-816061)     <console type='pty'>
	I0907 00:03:19.220088   26504 main.go:141] libmachine: (multinode-816061)       <target type='serial' port='0'/>
	I0907 00:03:19.220101   26504 main.go:141] libmachine: (multinode-816061)     </console>
	I0907 00:03:19.220118   26504 main.go:141] libmachine: (multinode-816061)     <rng model='virtio'>
	I0907 00:03:19.220132   26504 main.go:141] libmachine: (multinode-816061)       <backend model='random'>/dev/random</backend>
	I0907 00:03:19.220143   26504 main.go:141] libmachine: (multinode-816061)     </rng>
	I0907 00:03:19.220157   26504 main.go:141] libmachine: (multinode-816061)     
	I0907 00:03:19.220171   26504 main.go:141] libmachine: (multinode-816061)     
	I0907 00:03:19.220192   26504 main.go:141] libmachine: (multinode-816061)   </devices>
	I0907 00:03:19.220212   26504 main.go:141] libmachine: (multinode-816061) </domain>
	I0907 00:03:19.220236   26504 main.go:141] libmachine: (multinode-816061) 
	I0907 00:03:19.224592   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:b3:52:fa in network default
	I0907 00:03:19.225155   26504 main.go:141] libmachine: (multinode-816061) Ensuring networks are active...
	I0907 00:03:19.225176   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:19.225790   26504 main.go:141] libmachine: (multinode-816061) Ensuring network default is active
	I0907 00:03:19.226121   26504 main.go:141] libmachine: (multinode-816061) Ensuring network mk-multinode-816061 is active
	I0907 00:03:19.226570   26504 main.go:141] libmachine: (multinode-816061) Getting domain xml...
	I0907 00:03:19.227237   26504 main.go:141] libmachine: (multinode-816061) Creating domain...
	I0907 00:03:20.428387   26504 main.go:141] libmachine: (multinode-816061) Waiting to get IP...
	I0907 00:03:20.429300   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:20.429753   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:20.429808   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:20.429752   26527 retry.go:31] will retry after 192.994135ms: waiting for machine to come up
	I0907 00:03:20.624205   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:20.624689   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:20.624721   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:20.624638   26527 retry.go:31] will retry after 373.364709ms: waiting for machine to come up
	I0907 00:03:20.999331   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:20.999763   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:20.999793   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:20.999721   26527 retry.go:31] will retry after 444.353821ms: waiting for machine to come up
	I0907 00:03:21.445235   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:21.445621   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:21.445656   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:21.445575   26527 retry.go:31] will retry after 514.777518ms: waiting for machine to come up
	I0907 00:03:21.962231   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:21.962612   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:21.962638   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:21.962564   26527 retry.go:31] will retry after 761.218045ms: waiting for machine to come up
	I0907 00:03:22.725343   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:22.725764   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:22.725795   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:22.725715   26527 retry.go:31] will retry after 773.351089ms: waiting for machine to come up
	I0907 00:03:23.500141   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:23.500492   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:23.500517   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:23.500444   26527 retry.go:31] will retry after 790.305975ms: waiting for machine to come up
	I0907 00:03:24.291987   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:24.292437   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:24.292465   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:24.292374   26527 retry.go:31] will retry after 1.179691866s: waiting for machine to come up
	I0907 00:03:25.473263   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:25.473631   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:25.473660   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:25.473584   26527 retry.go:31] will retry after 1.687792769s: waiting for machine to come up
	I0907 00:03:27.163544   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:27.163978   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:27.164001   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:27.163935   26527 retry.go:31] will retry after 2.061624541s: waiting for machine to come up
	I0907 00:03:29.227275   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:29.227731   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:29.227763   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:29.227673   26527 retry.go:31] will retry after 2.558944856s: waiting for machine to come up
	I0907 00:03:31.788013   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:31.788446   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:31.788479   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:31.788399   26527 retry.go:31] will retry after 2.471178388s: waiting for machine to come up
	I0907 00:03:34.261480   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:34.261881   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:34.261899   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:34.261861   26527 retry.go:31] will retry after 3.454502375s: waiting for machine to come up
	I0907 00:03:37.717601   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:37.718094   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:03:37.718137   26504 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:03:37.718005   26527 retry.go:31] will retry after 4.968851527s: waiting for machine to come up
	I0907 00:03:42.691863   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:42.692232   26504 main.go:141] libmachine: (multinode-816061) Found IP for machine: 192.168.39.212
	I0907 00:03:42.692275   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has current primary IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:42.692285   26504 main.go:141] libmachine: (multinode-816061) Reserving static IP address...
	I0907 00:03:42.692734   26504 main.go:141] libmachine: (multinode-816061) DBG | unable to find host DHCP lease matching {name: "multinode-816061", mac: "52:54:00:ef:52:c5", ip: "192.168.39.212"} in network mk-multinode-816061
	I0907 00:03:42.764226   26504 main.go:141] libmachine: (multinode-816061) DBG | Getting to WaitForSSH function...
	I0907 00:03:42.764251   26504 main.go:141] libmachine: (multinode-816061) Reserved static IP address: 192.168.39.212
	I0907 00:03:42.764260   26504 main.go:141] libmachine: (multinode-816061) Waiting for SSH to be available...
	I0907 00:03:42.766537   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:42.767057   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:42.767101   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:42.767144   26504 main.go:141] libmachine: (multinode-816061) DBG | Using SSH client type: external
	I0907 00:03:42.767178   26504 main.go:141] libmachine: (multinode-816061) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa (-rw-------)
	I0907 00:03:42.767212   26504 main.go:141] libmachine: (multinode-816061) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:03:42.767240   26504 main.go:141] libmachine: (multinode-816061) DBG | About to run SSH command:
	I0907 00:03:42.767267   26504 main.go:141] libmachine: (multinode-816061) DBG | exit 0
	I0907 00:03:42.867286   26504 main.go:141] libmachine: (multinode-816061) DBG | SSH cmd err, output: <nil>: 
	I0907 00:03:42.867547   26504 main.go:141] libmachine: (multinode-816061) KVM machine creation complete!
	I0907 00:03:42.867834   26504 main.go:141] libmachine: (multinode-816061) Calling .GetConfigRaw
	I0907 00:03:42.868376   26504 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:03:42.868598   26504 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:03:42.868849   26504 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0907 00:03:42.868865   26504 main.go:141] libmachine: (multinode-816061) Calling .GetState
	I0907 00:03:42.870348   26504 main.go:141] libmachine: Detecting operating system of created instance...
	I0907 00:03:42.870368   26504 main.go:141] libmachine: Waiting for SSH to be available...
	I0907 00:03:42.870378   26504 main.go:141] libmachine: Getting to WaitForSSH function...
	I0907 00:03:42.870392   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:03:42.872684   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:42.873158   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:42.873194   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:42.873382   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:03:42.873595   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:42.873735   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:42.873887   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:03:42.874034   26504 main.go:141] libmachine: Using SSH client type: native
	I0907 00:03:42.874661   26504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0907 00:03:42.874680   26504 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0907 00:03:43.006311   26504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:03:43.006336   26504 main.go:141] libmachine: Detecting the provisioner...
	I0907 00:03:43.006344   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:03:43.009326   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.009649   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:43.009676   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.009840   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:03:43.010046   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:43.010202   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:43.010330   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:03:43.010502   26504 main.go:141] libmachine: Using SSH client type: native
	I0907 00:03:43.011107   26504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0907 00:03:43.011121   26504 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0907 00:03:43.147899   26504 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0907 00:03:43.147997   26504 main.go:141] libmachine: found compatible host: buildroot
	I0907 00:03:43.148012   26504 main.go:141] libmachine: Provisioning with buildroot...
	I0907 00:03:43.148023   26504 main.go:141] libmachine: (multinode-816061) Calling .GetMachineName
	I0907 00:03:43.148294   26504 buildroot.go:166] provisioning hostname "multinode-816061"
	I0907 00:03:43.148324   26504 main.go:141] libmachine: (multinode-816061) Calling .GetMachineName
	I0907 00:03:43.148512   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:03:43.151165   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.151538   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:43.151574   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.151627   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:03:43.151825   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:43.151985   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:43.152109   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:03:43.152254   26504 main.go:141] libmachine: Using SSH client type: native
	I0907 00:03:43.152655   26504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0907 00:03:43.152670   26504 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-816061 && echo "multinode-816061" | sudo tee /etc/hostname
	I0907 00:03:43.295202   26504 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-816061
	
	I0907 00:03:43.295234   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:03:43.297826   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.298134   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:43.298168   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.298314   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:03:43.298514   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:43.298679   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:43.298842   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:03:43.298993   26504 main.go:141] libmachine: Using SSH client type: native
	I0907 00:03:43.299373   26504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0907 00:03:43.299392   26504 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-816061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-816061/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-816061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:03:43.438994   26504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:03:43.439019   26504 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:03:43.439067   26504 buildroot.go:174] setting up certificates
	I0907 00:03:43.439076   26504 provision.go:83] configureAuth start
	I0907 00:03:43.439087   26504 main.go:141] libmachine: (multinode-816061) Calling .GetMachineName
	I0907 00:03:43.439382   26504 main.go:141] libmachine: (multinode-816061) Calling .GetIP
	I0907 00:03:43.442164   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.442490   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:43.442512   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.442752   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:03:43.445119   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.445436   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:43.445469   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.445617   26504 provision.go:138] copyHostCerts
	I0907 00:03:43.445657   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:03:43.445694   26504 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:03:43.445714   26504 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:03:43.445780   26504 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:03:43.445867   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:03:43.445891   26504 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:03:43.445899   26504 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:03:43.445923   26504 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:03:43.445981   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:03:43.446083   26504 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:03:43.446099   26504 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:03:43.446146   26504 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:03:43.446227   26504 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.multinode-816061 san=[192.168.39.212 192.168.39.212 localhost 127.0.0.1 minikube multinode-816061]
	I0907 00:03:43.649121   26504 provision.go:172] copyRemoteCerts
	I0907 00:03:43.649176   26504 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:03:43.649199   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:03:43.651688   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.652024   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:43.652046   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.652281   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:03:43.652486   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:43.652682   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:03:43.652838   26504 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:03:43.748225   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0907 00:03:43.748311   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0907 00:03:43.773114   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0907 00:03:43.773190   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:03:43.796380   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0907 00:03:43.796452   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:03:43.819480   26504 provision.go:86] duration metric: configureAuth took 380.390445ms
	I0907 00:03:43.819528   26504 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:03:43.819733   26504 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:03:43.819817   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:03:43.822659   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.823020   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:43.823059   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:43.823186   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:03:43.823443   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:43.823605   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:43.823732   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:03:43.823882   26504 main.go:141] libmachine: Using SSH client type: native
	I0907 00:03:43.824271   26504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0907 00:03:43.824286   26504 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:03:44.148397   26504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:03:44.148436   26504 main.go:141] libmachine: Checking connection to Docker...
	I0907 00:03:44.148449   26504 main.go:141] libmachine: (multinode-816061) Calling .GetURL
	I0907 00:03:44.149681   26504 main.go:141] libmachine: (multinode-816061) DBG | Using libvirt version 6000000
	I0907 00:03:44.151800   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.152187   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:44.152211   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.152406   26504 main.go:141] libmachine: Docker is up and running!
	I0907 00:03:44.152440   26504 main.go:141] libmachine: Reticulating splines...
	I0907 00:03:44.152448   26504 client.go:171] LocalClient.Create took 25.398823899s
	I0907 00:03:44.152471   26504 start.go:167] duration metric: libmachine.API.Create for "multinode-816061" took 25.398880115s
	I0907 00:03:44.152483   26504 start.go:300] post-start starting for "multinode-816061" (driver="kvm2")
	I0907 00:03:44.152496   26504 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:03:44.152521   26504 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:03:44.152764   26504 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:03:44.152785   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:03:44.155004   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.155316   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:44.155344   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.155463   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:03:44.155629   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:44.155748   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:03:44.155885   26504 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:03:44.253592   26504 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:03:44.258234   26504 command_runner.go:130] > NAME=Buildroot
	I0907 00:03:44.258252   26504 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0907 00:03:44.258258   26504 command_runner.go:130] > ID=buildroot
	I0907 00:03:44.258265   26504 command_runner.go:130] > VERSION_ID=2021.02.12
	I0907 00:03:44.258272   26504 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0907 00:03:44.258308   26504 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:03:44.258323   26504 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:03:44.258389   26504 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:03:44.258484   26504 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:03:44.258495   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> /etc/ssl/certs/136572.pem
	I0907 00:03:44.258601   26504 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:03:44.267884   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:03:44.289373   26504 start.go:303] post-start completed in 136.876255ms
	I0907 00:03:44.289420   26504 main.go:141] libmachine: (multinode-816061) Calling .GetConfigRaw
	I0907 00:03:44.290023   26504 main.go:141] libmachine: (multinode-816061) Calling .GetIP
	I0907 00:03:44.292649   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.292977   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:44.293013   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.293283   26504 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/config.json ...
	I0907 00:03:44.293439   26504 start.go:128] duration metric: createHost completed in 25.558552973s
	I0907 00:03:44.293458   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:03:44.295866   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.296207   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:44.296241   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.296336   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:03:44.296526   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:44.296694   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:44.296819   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:03:44.296932   26504 main.go:141] libmachine: Using SSH client type: native
	I0907 00:03:44.297321   26504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0907 00:03:44.297332   26504 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:03:44.431669   26504 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694045024.402398357
	
	I0907 00:03:44.431690   26504 fix.go:206] guest clock: 1694045024.402398357
	I0907 00:03:44.431700   26504 fix.go:219] Guest: 2023-09-07 00:03:44.402398357 +0000 UTC Remote: 2023-09-07 00:03:44.293448955 +0000 UTC m=+25.664167863 (delta=108.949402ms)
	I0907 00:03:44.431723   26504 fix.go:190] guest clock delta is within tolerance: 108.949402ms
	I0907 00:03:44.431729   26504 start.go:83] releasing machines lock for "multinode-816061", held for 25.696963491s
	I0907 00:03:44.431755   26504 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:03:44.432034   26504 main.go:141] libmachine: (multinode-816061) Calling .GetIP
	I0907 00:03:44.434377   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.434694   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:44.434716   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.434902   26504 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:03:44.435442   26504 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:03:44.435635   26504 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:03:44.435712   26504 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:03:44.435757   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:03:44.435811   26504 ssh_runner.go:195] Run: cat /version.json
	I0907 00:03:44.435835   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:03:44.438386   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.438512   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.438735   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:44.438765   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.438876   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:03:44.439021   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:44.439039   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:44.439046   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:44.439206   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:03:44.439225   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:03:44.439405   26504 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:03:44.439413   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:03:44.439564   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:03:44.439668   26504 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:03:44.552994   26504 command_runner.go:130] > {"iso_version": "v1.31.0-1692872107-17120", "kicbase_version": "v0.0.40-1692613578-17086", "minikube_version": "v1.31.2", "commit": "9dc31f0284dc1a8a35859648c60120733f0f8296"}
	I0907 00:03:44.553059   26504 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0907 00:03:44.553144   26504 ssh_runner.go:195] Run: systemctl --version
	I0907 00:03:44.558703   26504 command_runner.go:130] > systemd 247 (247)
	I0907 00:03:44.558747   26504 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0907 00:03:44.558820   26504 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:03:44.721097   26504 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0907 00:03:44.727061   26504 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0907 00:03:44.727144   26504 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:03:44.727224   26504 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:03:44.742003   26504 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0907 00:03:44.742338   26504 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:03:44.742359   26504 start.go:466] detecting cgroup driver to use...
	I0907 00:03:44.742425   26504 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:03:44.759109   26504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:03:44.771633   26504 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:03:44.771704   26504 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:03:44.784641   26504 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:03:44.797476   26504 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:03:44.901058   26504 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0907 00:03:44.901139   26504 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:03:44.915562   26504 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0907 00:03:45.022910   26504 docker.go:212] disabling docker service ...
	I0907 00:03:45.022978   26504 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:03:45.037161   26504 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:03:45.049060   26504 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0907 00:03:45.049399   26504 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:03:45.063337   26504 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0907 00:03:45.151336   26504 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:03:45.164732   26504 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0907 00:03:45.165038   26504 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0907 00:03:45.258100   26504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:03:45.271301   26504 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:03:45.288427   26504 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0907 00:03:45.288478   26504 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:03:45.288530   26504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:03:45.297640   26504 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:03:45.297698   26504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:03:45.306826   26504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:03:45.316164   26504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:03:45.325303   26504 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:03:45.334608   26504 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:03:45.344106   26504 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:03:45.344357   26504 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:03:45.344426   26504 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:03:45.356263   26504 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:03:45.365803   26504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:03:45.486267   26504 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:03:45.657278   26504 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:03:45.657343   26504 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:03:45.662134   26504 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0907 00:03:45.662162   26504 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0907 00:03:45.662177   26504 command_runner.go:130] > Device: 16h/22d	Inode: 726         Links: 1
	I0907 00:03:45.662187   26504 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0907 00:03:45.662195   26504 command_runner.go:130] > Access: 2023-09-07 00:03:45.611025434 +0000
	I0907 00:03:45.662216   26504 command_runner.go:130] > Modify: 2023-09-07 00:03:45.611025434 +0000
	I0907 00:03:45.662224   26504 command_runner.go:130] > Change: 2023-09-07 00:03:45.611025434 +0000
	I0907 00:03:45.662228   26504 command_runner.go:130] >  Birth: -
	I0907 00:03:45.662540   26504 start.go:534] Will wait 60s for crictl version
	I0907 00:03:45.662636   26504 ssh_runner.go:195] Run: which crictl
	I0907 00:03:45.666498   26504 command_runner.go:130] > /usr/bin/crictl
	I0907 00:03:45.666571   26504 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:03:45.698289   26504 command_runner.go:130] > Version:  0.1.0
	I0907 00:03:45.698309   26504 command_runner.go:130] > RuntimeName:  cri-o
	I0907 00:03:45.698314   26504 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0907 00:03:45.698323   26504 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0907 00:03:45.698516   26504 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:03:45.698590   26504 ssh_runner.go:195] Run: crio --version
	I0907 00:03:45.746649   26504 command_runner.go:130] > crio version 1.24.1
	I0907 00:03:45.746669   26504 command_runner.go:130] > Version:          1.24.1
	I0907 00:03:45.746676   26504 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0907 00:03:45.746681   26504 command_runner.go:130] > GitTreeState:     dirty
	I0907 00:03:45.746690   26504 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0907 00:03:45.746694   26504 command_runner.go:130] > GoVersion:        go1.19.9
	I0907 00:03:45.746698   26504 command_runner.go:130] > Compiler:         gc
	I0907 00:03:45.746703   26504 command_runner.go:130] > Platform:         linux/amd64
	I0907 00:03:45.746709   26504 command_runner.go:130] > Linkmode:         dynamic
	I0907 00:03:45.746717   26504 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0907 00:03:45.746721   26504 command_runner.go:130] > SeccompEnabled:   true
	I0907 00:03:45.746726   26504 command_runner.go:130] > AppArmorEnabled:  false
	I0907 00:03:45.746825   26504 ssh_runner.go:195] Run: crio --version
	I0907 00:03:45.794089   26504 command_runner.go:130] > crio version 1.24.1
	I0907 00:03:45.794116   26504 command_runner.go:130] > Version:          1.24.1
	I0907 00:03:45.794127   26504 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0907 00:03:45.794147   26504 command_runner.go:130] > GitTreeState:     dirty
	I0907 00:03:45.794156   26504 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0907 00:03:45.794162   26504 command_runner.go:130] > GoVersion:        go1.19.9
	I0907 00:03:45.794168   26504 command_runner.go:130] > Compiler:         gc
	I0907 00:03:45.794175   26504 command_runner.go:130] > Platform:         linux/amd64
	I0907 00:03:45.794183   26504 command_runner.go:130] > Linkmode:         dynamic
	I0907 00:03:45.794194   26504 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0907 00:03:45.794201   26504 command_runner.go:130] > SeccompEnabled:   true
	I0907 00:03:45.794211   26504 command_runner.go:130] > AppArmorEnabled:  false
	I0907 00:03:45.797300   26504 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:03:45.798823   26504 main.go:141] libmachine: (multinode-816061) Calling .GetIP
	I0907 00:03:45.801555   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:45.801942   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:03:45.801986   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:03:45.802115   26504 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0907 00:03:45.806377   26504 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:03:45.818561   26504 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:03:45.818634   26504 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:03:45.845885   26504 command_runner.go:130] > {
	I0907 00:03:45.845911   26504 command_runner.go:130] >   "images": [
	I0907 00:03:45.845917   26504 command_runner.go:130] >   ]
	I0907 00:03:45.845922   26504 command_runner.go:130] > }
	I0907 00:03:45.847409   26504 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:03:45.847473   26504 ssh_runner.go:195] Run: which lz4
	I0907 00:03:45.851361   26504 command_runner.go:130] > /usr/bin/lz4
	I0907 00:03:45.851513   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0907 00:03:45.851606   26504 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0907 00:03:45.855762   26504 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:03:45.856018   26504 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:03:45.856050   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0907 00:03:47.708554   26504 crio.go:444] Took 1.856976 seconds to copy over tarball
	I0907 00:03:47.708629   26504 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:03:50.664659   26504 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.956007458s)
	I0907 00:03:50.664682   26504 crio.go:451] Took 2.956103 seconds to extract the tarball
	I0907 00:03:50.664690   26504 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:03:50.705604   26504 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:03:50.764631   26504 command_runner.go:130] > {
	I0907 00:03:50.764656   26504 command_runner.go:130] >   "images": [
	I0907 00:03:50.764664   26504 command_runner.go:130] >     {
	I0907 00:03:50.764677   26504 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0907 00:03:50.764684   26504 command_runner.go:130] >       "repoTags": [
	I0907 00:03:50.764693   26504 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0907 00:03:50.764699   26504 command_runner.go:130] >       ],
	I0907 00:03:50.764705   26504 command_runner.go:130] >       "repoDigests": [
	I0907 00:03:50.764719   26504 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0907 00:03:50.764736   26504 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0907 00:03:50.764742   26504 command_runner.go:130] >       ],
	I0907 00:03:50.764753   26504 command_runner.go:130] >       "size": "65249302",
	I0907 00:03:50.764759   26504 command_runner.go:130] >       "uid": null,
	I0907 00:03:50.764766   26504 command_runner.go:130] >       "username": "",
	I0907 00:03:50.764775   26504 command_runner.go:130] >       "spec": null
	I0907 00:03:50.764784   26504 command_runner.go:130] >     },
	I0907 00:03:50.764790   26504 command_runner.go:130] >     {
	I0907 00:03:50.764801   26504 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0907 00:03:50.764811   26504 command_runner.go:130] >       "repoTags": [
	I0907 00:03:50.764820   26504 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0907 00:03:50.764827   26504 command_runner.go:130] >       ],
	I0907 00:03:50.764836   26504 command_runner.go:130] >       "repoDigests": [
	I0907 00:03:50.764847   26504 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0907 00:03:50.764862   26504 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0907 00:03:50.764871   26504 command_runner.go:130] >       ],
	I0907 00:03:50.764878   26504 command_runner.go:130] >       "size": "31470524",
	I0907 00:03:50.764888   26504 command_runner.go:130] >       "uid": null,
	I0907 00:03:50.764899   26504 command_runner.go:130] >       "username": "",
	I0907 00:03:50.764909   26504 command_runner.go:130] >       "spec": null
	I0907 00:03:50.764915   26504 command_runner.go:130] >     },
	I0907 00:03:50.764923   26504 command_runner.go:130] >     {
	I0907 00:03:50.764933   26504 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0907 00:03:50.764942   26504 command_runner.go:130] >       "repoTags": [
	I0907 00:03:50.764950   26504 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0907 00:03:50.764959   26504 command_runner.go:130] >       ],
	I0907 00:03:50.764966   26504 command_runner.go:130] >       "repoDigests": [
	I0907 00:03:50.764980   26504 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0907 00:03:50.764994   26504 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0907 00:03:50.765008   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765015   26504 command_runner.go:130] >       "size": "53621675",
	I0907 00:03:50.765019   26504 command_runner.go:130] >       "uid": null,
	I0907 00:03:50.765024   26504 command_runner.go:130] >       "username": "",
	I0907 00:03:50.765029   26504 command_runner.go:130] >       "spec": null
	I0907 00:03:50.765035   26504 command_runner.go:130] >     },
	I0907 00:03:50.765038   26504 command_runner.go:130] >     {
	I0907 00:03:50.765047   26504 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0907 00:03:50.765051   26504 command_runner.go:130] >       "repoTags": [
	I0907 00:03:50.765056   26504 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0907 00:03:50.765062   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765066   26504 command_runner.go:130] >       "repoDigests": [
	I0907 00:03:50.765076   26504 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0907 00:03:50.765089   26504 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0907 00:03:50.765098   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765105   26504 command_runner.go:130] >       "size": "295456551",
	I0907 00:03:50.765134   26504 command_runner.go:130] >       "uid": {
	I0907 00:03:50.765145   26504 command_runner.go:130] >         "value": "0"
	I0907 00:03:50.765156   26504 command_runner.go:130] >       },
	I0907 00:03:50.765162   26504 command_runner.go:130] >       "username": "",
	I0907 00:03:50.765172   26504 command_runner.go:130] >       "spec": null
	I0907 00:03:50.765178   26504 command_runner.go:130] >     },
	I0907 00:03:50.765185   26504 command_runner.go:130] >     {
	I0907 00:03:50.765195   26504 command_runner.go:130] >       "id": "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77",
	I0907 00:03:50.765204   26504 command_runner.go:130] >       "repoTags": [
	I0907 00:03:50.765213   26504 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0907 00:03:50.765222   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765229   26504 command_runner.go:130] >       "repoDigests": [
	I0907 00:03:50.765244   26504 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774",
	I0907 00:03:50.765259   26504 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0907 00:03:50.765268   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765275   26504 command_runner.go:130] >       "size": "126972880",
	I0907 00:03:50.765284   26504 command_runner.go:130] >       "uid": {
	I0907 00:03:50.765290   26504 command_runner.go:130] >         "value": "0"
	I0907 00:03:50.765300   26504 command_runner.go:130] >       },
	I0907 00:03:50.765307   26504 command_runner.go:130] >       "username": "",
	I0907 00:03:50.765316   26504 command_runner.go:130] >       "spec": null
	I0907 00:03:50.765321   26504 command_runner.go:130] >     },
	I0907 00:03:50.765325   26504 command_runner.go:130] >     {
	I0907 00:03:50.765332   26504 command_runner.go:130] >       "id": "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac",
	I0907 00:03:50.765338   26504 command_runner.go:130] >       "repoTags": [
	I0907 00:03:50.765344   26504 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0907 00:03:50.765350   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765353   26504 command_runner.go:130] >       "repoDigests": [
	I0907 00:03:50.765361   26504 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830",
	I0907 00:03:50.765370   26504 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0907 00:03:50.765374   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765378   26504 command_runner.go:130] >       "size": "123163446",
	I0907 00:03:50.765385   26504 command_runner.go:130] >       "uid": {
	I0907 00:03:50.765389   26504 command_runner.go:130] >         "value": "0"
	I0907 00:03:50.765393   26504 command_runner.go:130] >       },
	I0907 00:03:50.765397   26504 command_runner.go:130] >       "username": "",
	I0907 00:03:50.765402   26504 command_runner.go:130] >       "spec": null
	I0907 00:03:50.765406   26504 command_runner.go:130] >     },
	I0907 00:03:50.765412   26504 command_runner.go:130] >     {
	I0907 00:03:50.765419   26504 command_runner.go:130] >       "id": "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5",
	I0907 00:03:50.765425   26504 command_runner.go:130] >       "repoTags": [
	I0907 00:03:50.765430   26504 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0907 00:03:50.765436   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765439   26504 command_runner.go:130] >       "repoDigests": [
	I0907 00:03:50.765446   26504 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3",
	I0907 00:03:50.765456   26504 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"
	I0907 00:03:50.765459   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765463   26504 command_runner.go:130] >       "size": "74680215",
	I0907 00:03:50.765470   26504 command_runner.go:130] >       "uid": null,
	I0907 00:03:50.765473   26504 command_runner.go:130] >       "username": "",
	I0907 00:03:50.765477   26504 command_runner.go:130] >       "spec": null
	I0907 00:03:50.765481   26504 command_runner.go:130] >     },
	I0907 00:03:50.765484   26504 command_runner.go:130] >     {
	I0907 00:03:50.765490   26504 command_runner.go:130] >       "id": "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a",
	I0907 00:03:50.765499   26504 command_runner.go:130] >       "repoTags": [
	I0907 00:03:50.765504   26504 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0907 00:03:50.765510   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765514   26504 command_runner.go:130] >       "repoDigests": [
	I0907 00:03:50.765529   26504 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4",
	I0907 00:03:50.765550   26504 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"
	I0907 00:03:50.765557   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765561   26504 command_runner.go:130] >       "size": "61477686",
	I0907 00:03:50.765565   26504 command_runner.go:130] >       "uid": {
	I0907 00:03:50.765571   26504 command_runner.go:130] >         "value": "0"
	I0907 00:03:50.765575   26504 command_runner.go:130] >       },
	I0907 00:03:50.765579   26504 command_runner.go:130] >       "username": "",
	I0907 00:03:50.765584   26504 command_runner.go:130] >       "spec": null
	I0907 00:03:50.765588   26504 command_runner.go:130] >     },
	I0907 00:03:50.765594   26504 command_runner.go:130] >     {
	I0907 00:03:50.765600   26504 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0907 00:03:50.765607   26504 command_runner.go:130] >       "repoTags": [
	I0907 00:03:50.765611   26504 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0907 00:03:50.765617   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765621   26504 command_runner.go:130] >       "repoDigests": [
	I0907 00:03:50.765628   26504 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0907 00:03:50.765638   26504 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0907 00:03:50.765641   26504 command_runner.go:130] >       ],
	I0907 00:03:50.765645   26504 command_runner.go:130] >       "size": "750414",
	I0907 00:03:50.765651   26504 command_runner.go:130] >       "uid": {
	I0907 00:03:50.765661   26504 command_runner.go:130] >         "value": "65535"
	I0907 00:03:50.765666   26504 command_runner.go:130] >       },
	I0907 00:03:50.765676   26504 command_runner.go:130] >       "username": "",
	I0907 00:03:50.765686   26504 command_runner.go:130] >       "spec": null
	I0907 00:03:50.765692   26504 command_runner.go:130] >     }
	I0907 00:03:50.765700   26504 command_runner.go:130] >   ]
	I0907 00:03:50.765706   26504 command_runner.go:130] > }
	I0907 00:03:50.765830   26504 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:03:50.765842   26504 cache_images.go:84] Images are preloaded, skipping loading
	I0907 00:03:50.765897   26504 ssh_runner.go:195] Run: crio config
	I0907 00:03:50.820805   26504 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0907 00:03:50.820837   26504 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0907 00:03:50.820850   26504 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0907 00:03:50.820860   26504 command_runner.go:130] > #
	I0907 00:03:50.820871   26504 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0907 00:03:50.820894   26504 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0907 00:03:50.820908   26504 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0907 00:03:50.820919   26504 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0907 00:03:50.820928   26504 command_runner.go:130] > # reload'.
	I0907 00:03:50.820940   26504 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0907 00:03:50.820953   26504 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0907 00:03:50.820964   26504 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0907 00:03:50.820976   26504 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0907 00:03:50.820984   26504 command_runner.go:130] > [crio]
	I0907 00:03:50.820997   26504 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0907 00:03:50.821008   26504 command_runner.go:130] > # containers images, in this directory.
	I0907 00:03:50.821039   26504 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0907 00:03:50.821056   26504 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0907 00:03:50.821331   26504 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0907 00:03:50.821347   26504 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0907 00:03:50.821357   26504 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0907 00:03:50.821610   26504 command_runner.go:130] > storage_driver = "overlay"
	I0907 00:03:50.821631   26504 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0907 00:03:50.821642   26504 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0907 00:03:50.821653   26504 command_runner.go:130] > storage_option = [
	I0907 00:03:50.821985   26504 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0907 00:03:50.822025   26504 command_runner.go:130] > ]
	I0907 00:03:50.822042   26504 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0907 00:03:50.822054   26504 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0907 00:03:50.822546   26504 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0907 00:03:50.822562   26504 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0907 00:03:50.822572   26504 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0907 00:03:50.822593   26504 command_runner.go:130] > # always happen on a node reboot
	I0907 00:03:50.823304   26504 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0907 00:03:50.823321   26504 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0907 00:03:50.823331   26504 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0907 00:03:50.823357   26504 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0907 00:03:50.823972   26504 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0907 00:03:50.823994   26504 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0907 00:03:50.824007   26504 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0907 00:03:50.824015   26504 command_runner.go:130] > # internal_wipe = true
	I0907 00:03:50.824022   26504 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0907 00:03:50.824032   26504 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0907 00:03:50.824041   26504 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0907 00:03:50.824065   26504 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0907 00:03:50.824078   26504 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0907 00:03:50.824087   26504 command_runner.go:130] > [crio.api]
	I0907 00:03:50.824096   26504 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0907 00:03:50.824107   26504 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0907 00:03:50.824115   26504 command_runner.go:130] > # IP address on which the stream server will listen.
	I0907 00:03:50.824126   26504 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0907 00:03:50.824137   26504 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0907 00:03:50.824149   26504 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0907 00:03:50.824158   26504 command_runner.go:130] > # stream_port = "0"
	I0907 00:03:50.824169   26504 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0907 00:03:50.824180   26504 command_runner.go:130] > # stream_enable_tls = false
	I0907 00:03:50.824190   26504 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0907 00:03:50.824199   26504 command_runner.go:130] > # stream_idle_timeout = ""
	I0907 00:03:50.824210   26504 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0907 00:03:50.824223   26504 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0907 00:03:50.824232   26504 command_runner.go:130] > # minutes.
	I0907 00:03:50.824239   26504 command_runner.go:130] > # stream_tls_cert = ""
	I0907 00:03:50.824264   26504 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0907 00:03:50.824281   26504 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0907 00:03:50.824287   26504 command_runner.go:130] > # stream_tls_key = ""
	I0907 00:03:50.824297   26504 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0907 00:03:50.824311   26504 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0907 00:03:50.824323   26504 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0907 00:03:50.824333   26504 command_runner.go:130] > # stream_tls_ca = ""
	I0907 00:03:50.824347   26504 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0907 00:03:50.824357   26504 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0907 00:03:50.824369   26504 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0907 00:03:50.824379   26504 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0907 00:03:50.824403   26504 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0907 00:03:50.824415   26504 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0907 00:03:50.824423   26504 command_runner.go:130] > [crio.runtime]
	I0907 00:03:50.824436   26504 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0907 00:03:50.824447   26504 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0907 00:03:50.824457   26504 command_runner.go:130] > # "nofile=1024:2048"
	I0907 00:03:50.824470   26504 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0907 00:03:50.824483   26504 command_runner.go:130] > # default_ulimits = [
	I0907 00:03:50.824489   26504 command_runner.go:130] > # ]
	I0907 00:03:50.824502   26504 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0907 00:03:50.824511   26504 command_runner.go:130] > # no_pivot = false
	I0907 00:03:50.824524   26504 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0907 00:03:50.824537   26504 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0907 00:03:50.824548   26504 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0907 00:03:50.824560   26504 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0907 00:03:50.824572   26504 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0907 00:03:50.824586   26504 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0907 00:03:50.824597   26504 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0907 00:03:50.824607   26504 command_runner.go:130] > # Cgroup setting for conmon
	I0907 00:03:50.824619   26504 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0907 00:03:50.824628   26504 command_runner.go:130] > conmon_cgroup = "pod"
	I0907 00:03:50.824638   26504 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0907 00:03:50.824650   26504 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0907 00:03:50.824664   26504 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0907 00:03:50.824673   26504 command_runner.go:130] > conmon_env = [
	I0907 00:03:50.824713   26504 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0907 00:03:50.824723   26504 command_runner.go:130] > ]
	I0907 00:03:50.824732   26504 command_runner.go:130] > # Additional environment variables to set for all the
	I0907 00:03:50.824740   26504 command_runner.go:130] > # containers. These are overridden if set in the
	I0907 00:03:50.824752   26504 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0907 00:03:50.824762   26504 command_runner.go:130] > # default_env = [
	I0907 00:03:50.824767   26504 command_runner.go:130] > # ]
	I0907 00:03:50.824780   26504 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0907 00:03:50.825079   26504 command_runner.go:130] > # selinux = false
	I0907 00:03:50.825097   26504 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0907 00:03:50.825106   26504 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0907 00:03:50.825114   26504 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0907 00:03:50.825965   26504 command_runner.go:130] > # seccomp_profile = ""
	I0907 00:03:50.825984   26504 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0907 00:03:50.825994   26504 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0907 00:03:50.826006   26504 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0907 00:03:50.826020   26504 command_runner.go:130] > # which might increase security.
	I0907 00:03:50.826028   26504 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0907 00:03:50.826044   26504 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0907 00:03:50.826058   26504 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0907 00:03:50.826076   26504 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0907 00:03:50.826090   26504 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0907 00:03:50.826102   26504 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:03:50.826111   26504 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0907 00:03:50.826125   26504 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0907 00:03:50.826137   26504 command_runner.go:130] > # the cgroup blockio controller.
	I0907 00:03:50.826147   26504 command_runner.go:130] > # blockio_config_file = ""
	I0907 00:03:50.826161   26504 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0907 00:03:50.826170   26504 command_runner.go:130] > # irqbalance daemon.
	I0907 00:03:50.826179   26504 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0907 00:03:50.826194   26504 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0907 00:03:50.826202   26504 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:03:50.826212   26504 command_runner.go:130] > # rdt_config_file = ""
	I0907 00:03:50.826220   26504 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0907 00:03:50.826231   26504 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0907 00:03:50.826245   26504 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0907 00:03:50.826255   26504 command_runner.go:130] > # separate_pull_cgroup = ""
	I0907 00:03:50.826268   26504 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0907 00:03:50.826281   26504 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0907 00:03:50.826290   26504 command_runner.go:130] > # will be added.
	I0907 00:03:50.826297   26504 command_runner.go:130] > # default_capabilities = [
	I0907 00:03:50.826306   26504 command_runner.go:130] > # 	"CHOWN",
	I0907 00:03:50.826313   26504 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0907 00:03:50.826323   26504 command_runner.go:130] > # 	"FSETID",
	I0907 00:03:50.826330   26504 command_runner.go:130] > # 	"FOWNER",
	I0907 00:03:50.826339   26504 command_runner.go:130] > # 	"SETGID",
	I0907 00:03:50.826345   26504 command_runner.go:130] > # 	"SETUID",
	I0907 00:03:50.826355   26504 command_runner.go:130] > # 	"SETPCAP",
	I0907 00:03:50.826361   26504 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0907 00:03:50.826375   26504 command_runner.go:130] > # 	"KILL",
	I0907 00:03:50.826384   26504 command_runner.go:130] > # ]
	I0907 00:03:50.826394   26504 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0907 00:03:50.826408   26504 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0907 00:03:50.826415   26504 command_runner.go:130] > # default_sysctls = [
	I0907 00:03:50.826425   26504 command_runner.go:130] > # ]
	I0907 00:03:50.826433   26504 command_runner.go:130] > # List of devices on the host that a
	I0907 00:03:50.826446   26504 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0907 00:03:50.826455   26504 command_runner.go:130] > # allowed_devices = [
	I0907 00:03:50.826462   26504 command_runner.go:130] > # 	"/dev/fuse",
	I0907 00:03:50.826472   26504 command_runner.go:130] > # ]
	I0907 00:03:50.826480   26504 command_runner.go:130] > # List of additional devices. specified as
	I0907 00:03:50.826495   26504 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0907 00:03:50.826507   26504 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0907 00:03:50.826532   26504 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0907 00:03:50.826545   26504 command_runner.go:130] > # additional_devices = [
	I0907 00:03:50.826551   26504 command_runner.go:130] > # ]
	I0907 00:03:50.826560   26504 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0907 00:03:50.826570   26504 command_runner.go:130] > # cdi_spec_dirs = [
	I0907 00:03:50.826577   26504 command_runner.go:130] > # 	"/etc/cdi",
	I0907 00:03:50.826585   26504 command_runner.go:130] > # 	"/var/run/cdi",
	I0907 00:03:50.826592   26504 command_runner.go:130] > # ]
	I0907 00:03:50.826602   26504 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0907 00:03:50.826613   26504 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0907 00:03:50.826621   26504 command_runner.go:130] > # Defaults to false.
	I0907 00:03:50.826630   26504 command_runner.go:130] > # device_ownership_from_security_context = false
	I0907 00:03:50.826645   26504 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0907 00:03:50.826661   26504 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0907 00:03:50.826670   26504 command_runner.go:130] > # hooks_dir = [
	I0907 00:03:50.826678   26504 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0907 00:03:50.826686   26504 command_runner.go:130] > # ]
	I0907 00:03:50.826696   26504 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0907 00:03:50.826710   26504 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0907 00:03:50.826723   26504 command_runner.go:130] > # its default mounts from the following two files:
	I0907 00:03:50.826732   26504 command_runner.go:130] > #
	I0907 00:03:50.826742   26504 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0907 00:03:50.826756   26504 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0907 00:03:50.826768   26504 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0907 00:03:50.826784   26504 command_runner.go:130] > #
	I0907 00:03:50.826795   26504 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0907 00:03:50.826810   26504 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0907 00:03:50.826825   26504 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0907 00:03:50.826866   26504 command_runner.go:130] > #      only add mounts it finds in this file.
	I0907 00:03:50.826877   26504 command_runner.go:130] > #
	I0907 00:03:50.826884   26504 command_runner.go:130] > # default_mounts_file = ""
	I0907 00:03:50.826894   26504 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0907 00:03:50.826908   26504 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0907 00:03:50.826919   26504 command_runner.go:130] > pids_limit = 1024
	I0907 00:03:50.826931   26504 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0907 00:03:50.826946   26504 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0907 00:03:50.826961   26504 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0907 00:03:50.826977   26504 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0907 00:03:50.826986   26504 command_runner.go:130] > # log_size_max = -1
	I0907 00:03:50.826997   26504 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0907 00:03:50.827008   26504 command_runner.go:130] > # log_to_journald = false
	I0907 00:03:50.827019   26504 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0907 00:03:50.827030   26504 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0907 00:03:50.827042   26504 command_runner.go:130] > # Path to directory for container attach sockets.
	I0907 00:03:50.827054   26504 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0907 00:03:50.827066   26504 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0907 00:03:50.827083   26504 command_runner.go:130] > # bind_mount_prefix = ""
	I0907 00:03:50.827096   26504 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0907 00:03:50.827103   26504 command_runner.go:130] > # read_only = false
	I0907 00:03:50.827117   26504 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0907 00:03:50.827131   26504 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0907 00:03:50.827139   26504 command_runner.go:130] > # live configuration reload.
	I0907 00:03:50.827149   26504 command_runner.go:130] > # log_level = "info"
	I0907 00:03:50.827158   26504 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0907 00:03:50.827171   26504 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:03:50.827180   26504 command_runner.go:130] > # log_filter = ""
	I0907 00:03:50.827190   26504 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0907 00:03:50.827203   26504 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0907 00:03:50.827214   26504 command_runner.go:130] > # separated by comma.
	I0907 00:03:50.827221   26504 command_runner.go:130] > # uid_mappings = ""
	I0907 00:03:50.827242   26504 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0907 00:03:50.827255   26504 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0907 00:03:50.827265   26504 command_runner.go:130] > # separated by comma.
	I0907 00:03:50.827276   26504 command_runner.go:130] > # gid_mappings = ""
	I0907 00:03:50.827290   26504 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0907 00:03:50.827304   26504 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0907 00:03:50.827322   26504 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0907 00:03:50.827332   26504 command_runner.go:130] > # minimum_mappable_uid = -1
	I0907 00:03:50.827346   26504 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0907 00:03:50.827358   26504 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0907 00:03:50.827366   26504 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0907 00:03:50.827374   26504 command_runner.go:130] > # minimum_mappable_gid = -1
	I0907 00:03:50.827387   26504 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0907 00:03:50.827401   26504 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0907 00:03:50.827414   26504 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0907 00:03:50.827422   26504 command_runner.go:130] > # ctr_stop_timeout = 30
	I0907 00:03:50.827434   26504 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0907 00:03:50.827445   26504 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0907 00:03:50.827451   26504 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0907 00:03:50.827459   26504 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0907 00:03:50.827470   26504 command_runner.go:130] > drop_infra_ctr = false
	I0907 00:03:50.827482   26504 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0907 00:03:50.827495   26504 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0907 00:03:50.827510   26504 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0907 00:03:50.827520   26504 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0907 00:03:50.827530   26504 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0907 00:03:50.827542   26504 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0907 00:03:50.827553   26504 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0907 00:03:50.827568   26504 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0907 00:03:50.827577   26504 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0907 00:03:50.827590   26504 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0907 00:03:50.827604   26504 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0907 00:03:50.827618   26504 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0907 00:03:50.827626   26504 command_runner.go:130] > # default_runtime = "runc"
	I0907 00:03:50.827637   26504 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0907 00:03:50.827653   26504 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0907 00:03:50.827670   26504 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0907 00:03:50.827681   26504 command_runner.go:130] > # creation as a file is not desired either.
	I0907 00:03:50.827691   26504 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0907 00:03:50.827733   26504 command_runner.go:130] > # the hostname is being managed dynamically.
	I0907 00:03:50.827748   26504 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0907 00:03:50.827756   26504 command_runner.go:130] > # ]
	I0907 00:03:50.827767   26504 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0907 00:03:50.827778   26504 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0907 00:03:50.827786   26504 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0907 00:03:50.827800   26504 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0907 00:03:50.827809   26504 command_runner.go:130] > #
	I0907 00:03:50.827818   26504 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0907 00:03:50.827829   26504 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0907 00:03:50.827839   26504 command_runner.go:130] > #  runtime_type = "oci"
	I0907 00:03:50.827847   26504 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0907 00:03:50.827857   26504 command_runner.go:130] > #  privileged_without_host_devices = false
	I0907 00:03:50.827865   26504 command_runner.go:130] > #  allowed_annotations = []
	I0907 00:03:50.827868   26504 command_runner.go:130] > # Where:
	I0907 00:03:50.827879   26504 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0907 00:03:50.827893   26504 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0907 00:03:50.827907   26504 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0907 00:03:50.827920   26504 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0907 00:03:50.827929   26504 command_runner.go:130] > #   in $PATH.
	I0907 00:03:50.827939   26504 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0907 00:03:50.827956   26504 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0907 00:03:50.827970   26504 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0907 00:03:50.827980   26504 command_runner.go:130] > #   state.
	I0907 00:03:50.827991   26504 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0907 00:03:50.828003   26504 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0907 00:03:50.828017   26504 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0907 00:03:50.828031   26504 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0907 00:03:50.828058   26504 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0907 00:03:50.828077   26504 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0907 00:03:50.828085   26504 command_runner.go:130] > #   The currently recognized values are:
	I0907 00:03:50.828092   26504 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0907 00:03:50.828101   26504 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0907 00:03:50.828108   26504 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0907 00:03:50.828118   26504 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0907 00:03:50.828133   26504 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0907 00:03:50.828147   26504 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0907 00:03:50.828164   26504 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0907 00:03:50.828179   26504 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0907 00:03:50.828191   26504 command_runner.go:130] > #   should be moved to the container's cgroup
	I0907 00:03:50.828201   26504 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0907 00:03:50.828212   26504 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0907 00:03:50.828219   26504 command_runner.go:130] > runtime_type = "oci"
	I0907 00:03:50.828230   26504 command_runner.go:130] > runtime_root = "/run/runc"
	I0907 00:03:50.828254   26504 command_runner.go:130] > runtime_config_path = ""
	I0907 00:03:50.828264   26504 command_runner.go:130] > monitor_path = ""
	I0907 00:03:50.828271   26504 command_runner.go:130] > monitor_cgroup = ""
	I0907 00:03:50.828281   26504 command_runner.go:130] > monitor_exec_cgroup = ""
	I0907 00:03:50.828294   26504 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0907 00:03:50.828304   26504 command_runner.go:130] > # running containers
	I0907 00:03:50.828311   26504 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0907 00:03:50.828325   26504 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0907 00:03:50.828369   26504 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0907 00:03:50.828382   26504 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0907 00:03:50.828394   26504 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0907 00:03:50.828405   26504 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0907 00:03:50.828417   26504 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0907 00:03:50.828424   26504 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0907 00:03:50.828436   26504 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0907 00:03:50.828444   26504 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0907 00:03:50.828459   26504 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0907 00:03:50.828471   26504 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0907 00:03:50.828484   26504 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0907 00:03:50.828500   26504 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0907 00:03:50.828519   26504 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0907 00:03:50.828532   26504 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0907 00:03:50.828548   26504 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0907 00:03:50.828565   26504 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0907 00:03:50.828577   26504 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0907 00:03:50.828592   26504 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0907 00:03:50.828600   26504 command_runner.go:130] > # Example:
	I0907 00:03:50.828608   26504 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0907 00:03:50.828620   26504 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0907 00:03:50.828632   26504 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0907 00:03:50.828645   26504 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0907 00:03:50.828654   26504 command_runner.go:130] > # cpuset = 0
	I0907 00:03:50.828661   26504 command_runner.go:130] > # cpushares = "0-1"
	I0907 00:03:50.828670   26504 command_runner.go:130] > # Where:
	I0907 00:03:50.828678   26504 command_runner.go:130] > # The workload name is workload-type.
	I0907 00:03:50.828689   26504 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0907 00:03:50.828700   26504 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0907 00:03:50.828710   26504 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0907 00:03:50.828724   26504 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0907 00:03:50.828737   26504 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0907 00:03:50.828743   26504 command_runner.go:130] > # 
	I0907 00:03:50.828756   26504 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0907 00:03:50.828766   26504 command_runner.go:130] > #
	I0907 00:03:50.828776   26504 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0907 00:03:50.828789   26504 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0907 00:03:50.828803   26504 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0907 00:03:50.828818   26504 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0907 00:03:50.828830   26504 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0907 00:03:50.828836   26504 command_runner.go:130] > [crio.image]
	I0907 00:03:50.828850   26504 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0907 00:03:50.828861   26504 command_runner.go:130] > # default_transport = "docker://"
	I0907 00:03:50.828874   26504 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0907 00:03:50.828888   26504 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0907 00:03:50.828899   26504 command_runner.go:130] > # global_auth_file = ""
	I0907 00:03:50.828921   26504 command_runner.go:130] > # The image used to instantiate infra containers.
	I0907 00:03:50.828932   26504 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:03:50.828941   26504 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0907 00:03:50.828954   26504 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0907 00:03:50.828963   26504 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0907 00:03:50.828972   26504 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:03:50.828983   26504 command_runner.go:130] > # pause_image_auth_file = ""
	I0907 00:03:50.828993   26504 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0907 00:03:50.829007   26504 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0907 00:03:50.829020   26504 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0907 00:03:50.829033   26504 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0907 00:03:50.829044   26504 command_runner.go:130] > # pause_command = "/pause"
	I0907 00:03:50.829052   26504 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0907 00:03:50.829065   26504 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0907 00:03:50.829081   26504 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0907 00:03:50.829092   26504 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0907 00:03:50.829122   26504 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0907 00:03:50.829130   26504 command_runner.go:130] > # signature_policy = ""
	I0907 00:03:50.829140   26504 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0907 00:03:50.829150   26504 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0907 00:03:50.829156   26504 command_runner.go:130] > # changing them here.
	I0907 00:03:50.829163   26504 command_runner.go:130] > # insecure_registries = [
	I0907 00:03:50.829169   26504 command_runner.go:130] > # ]
	I0907 00:03:50.829180   26504 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0907 00:03:50.829188   26504 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0907 00:03:50.829195   26504 command_runner.go:130] > # image_volumes = "mkdir"
	I0907 00:03:50.829206   26504 command_runner.go:130] > # Temporary directory to use for storing big files
	I0907 00:03:50.829213   26504 command_runner.go:130] > # big_files_temporary_dir = ""
	I0907 00:03:50.829221   26504 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0907 00:03:50.829225   26504 command_runner.go:130] > # CNI plugins.
	I0907 00:03:50.829230   26504 command_runner.go:130] > [crio.network]
	I0907 00:03:50.829240   26504 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0907 00:03:50.829250   26504 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0907 00:03:50.829257   26504 command_runner.go:130] > # cni_default_network = ""
	I0907 00:03:50.829267   26504 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0907 00:03:50.829278   26504 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0907 00:03:50.829291   26504 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0907 00:03:50.829301   26504 command_runner.go:130] > # plugin_dirs = [
	I0907 00:03:50.829310   26504 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0907 00:03:50.829316   26504 command_runner.go:130] > # ]
	I0907 00:03:50.829329   26504 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0907 00:03:50.829339   26504 command_runner.go:130] > [crio.metrics]
	I0907 00:03:50.829347   26504 command_runner.go:130] > # Globally enable or disable metrics support.
	I0907 00:03:50.829357   26504 command_runner.go:130] > enable_metrics = true
	I0907 00:03:50.829365   26504 command_runner.go:130] > # Specify enabled metrics collectors.
	I0907 00:03:50.829375   26504 command_runner.go:130] > # Per default all metrics are enabled.
	I0907 00:03:50.829391   26504 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0907 00:03:50.829401   26504 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0907 00:03:50.829414   26504 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0907 00:03:50.829424   26504 command_runner.go:130] > # metrics_collectors = [
	I0907 00:03:50.829431   26504 command_runner.go:130] > # 	"operations",
	I0907 00:03:50.829443   26504 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0907 00:03:50.829454   26504 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0907 00:03:50.829467   26504 command_runner.go:130] > # 	"operations_errors",
	I0907 00:03:50.829478   26504 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0907 00:03:50.829488   26504 command_runner.go:130] > # 	"image_pulls_by_name",
	I0907 00:03:50.829494   26504 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0907 00:03:50.829501   26504 command_runner.go:130] > # 	"image_pulls_failures",
	I0907 00:03:50.829508   26504 command_runner.go:130] > # 	"image_pulls_successes",
	I0907 00:03:50.829518   26504 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0907 00:03:50.829528   26504 command_runner.go:130] > # 	"image_layer_reuse",
	I0907 00:03:50.829536   26504 command_runner.go:130] > # 	"containers_oom_total",
	I0907 00:03:50.829546   26504 command_runner.go:130] > # 	"containers_oom",
	I0907 00:03:50.829556   26504 command_runner.go:130] > # 	"processes_defunct",
	I0907 00:03:50.829565   26504 command_runner.go:130] > # 	"operations_total",
	I0907 00:03:50.829573   26504 command_runner.go:130] > # 	"operations_latency_seconds",
	I0907 00:03:50.829584   26504 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0907 00:03:50.829593   26504 command_runner.go:130] > # 	"operations_errors_total",
	I0907 00:03:50.829601   26504 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0907 00:03:50.829606   26504 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0907 00:03:50.829614   26504 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0907 00:03:50.829625   26504 command_runner.go:130] > # 	"image_pulls_success_total",
	I0907 00:03:50.829633   26504 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0907 00:03:50.829644   26504 command_runner.go:130] > # 	"containers_oom_count_total",
	I0907 00:03:50.829651   26504 command_runner.go:130] > # ]
	I0907 00:03:50.829660   26504 command_runner.go:130] > # The port on which the metrics server will listen.
	I0907 00:03:50.829669   26504 command_runner.go:130] > # metrics_port = 9090
	I0907 00:03:50.829678   26504 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0907 00:03:50.829688   26504 command_runner.go:130] > # metrics_socket = ""
	I0907 00:03:50.829695   26504 command_runner.go:130] > # The certificate for the secure metrics server.
	I0907 00:03:50.829706   26504 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0907 00:03:50.829720   26504 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0907 00:03:50.829733   26504 command_runner.go:130] > # certificate on any modification event.
	I0907 00:03:50.829739   26504 command_runner.go:130] > # metrics_cert = ""
	I0907 00:03:50.829752   26504 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0907 00:03:50.829763   26504 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0907 00:03:50.829773   26504 command_runner.go:130] > # metrics_key = ""
	I0907 00:03:50.829783   26504 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0907 00:03:50.829792   26504 command_runner.go:130] > [crio.tracing]
	I0907 00:03:50.829800   26504 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0907 00:03:50.829806   26504 command_runner.go:130] > # enable_tracing = false
	I0907 00:03:50.829815   26504 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0907 00:03:50.829826   26504 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0907 00:03:50.829836   26504 command_runner.go:130] > # Number of samples to collect per million spans.
	I0907 00:03:50.829847   26504 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0907 00:03:50.829858   26504 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0907 00:03:50.829867   26504 command_runner.go:130] > [crio.stats]
	I0907 00:03:50.829877   26504 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0907 00:03:50.829891   26504 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0907 00:03:50.829899   26504 command_runner.go:130] > # stats_collection_period = 0
	I0907 00:03:50.830085   26504 command_runner.go:130] ! time="2023-09-07 00:03:50.791662802Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0907 00:03:50.830109   26504 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
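
The effective configuration dumped above is plain TOML, so it can be read back programmatically. Below is a minimal Go sketch, not minikube's own code, that runs `crio config` and decodes the handful of fields minikube overrides (cgroup_manager, pids_limit, pause_image); the github.com/BurntSushi/toml decoder is an assumed dependency, and any TOML library would do:

package main

import (
	"fmt"
	"os/exec"

	"github.com/BurntSushi/toml" // assumed TOML decoder
)

// crioConfig mirrors only the tables and keys we care about from `crio config`.
type crioConfig struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
			PidsLimit     int64  `toml:"pids_limit"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	// Ask CRI-O to print its effective configuration, as the log above does.
	out, err := exec.Command("crio", "config").Output()
	if err != nil {
		panic(err)
	}
	var cfg crioConfig
	if _, err := toml.Decode(string(out), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("cgroup_manager=%s pause_image=%s pids_limit=%d\n",
		cfg.Crio.Runtime.CgroupManager, cfg.Crio.Image.PauseImage, cfg.Crio.Runtime.PidsLimit)
}
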
	I0907 00:03:50.830212   26504 cni.go:84] Creating CNI manager for ""
	I0907 00:03:50.830231   26504 cni.go:136] 1 nodes found, recommending kindnet
	I0907 00:03:50.830251   26504 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:03:50.830279   26504 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-816061 NodeName:multinode-816061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:03:50.830436   26504 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-816061"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:03:50.830550   26504 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-816061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
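
The kubelet drop-in shown above is produced by filling a template with the node's Kubernetes version, hostname override, and IP. A rough, self-contained Go sketch of that kind of rendering with text/template follows; the template text is illustrative only and is not minikube's actual 10-kubeadm.conf template:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is a hypothetical drop-in template loosely modeled on the
// ExecStart line logged above.
const kubeletUnit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet \
  --container-runtime-endpoint=unix:///var/run/crio/crio.sock \
  --hostname-override={{.NodeName}} \
  --node-ip={{.NodeIP}} \
  --kubeconfig=/etc/kubernetes/kubelet.conf
`

func main() {
	// Values taken from the log above; in practice they come from the profile config.
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.28.1", "multinode-816061", "192.168.39.212"}

	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
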
	I0907 00:03:50.830618   26504 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:03:50.840294   26504 command_runner.go:130] > kubeadm
	I0907 00:03:50.840314   26504 command_runner.go:130] > kubectl
	I0907 00:03:50.840318   26504 command_runner.go:130] > kubelet
	I0907 00:03:50.840340   26504 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:03:50.840404   26504 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:03:50.849406   26504 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0907 00:03:50.865884   26504 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:03:50.882333   26504 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0907 00:03:50.899257   26504 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0907 00:03:50.903484   26504 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:03:50.916568   26504 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061 for IP: 192.168.39.212
	I0907 00:03:50.916615   26504 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:03:50.916794   26504 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:03:50.916851   26504 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:03:50.916906   26504 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key
	I0907 00:03:50.916921   26504 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt with IP's: []
	I0907 00:03:50.978238   26504 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt ...
	I0907 00:03:50.978266   26504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt: {Name:mk7d97f3e71b3eee4e9d46fffebd387b88a91cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:03:50.978418   26504 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key ...
	I0907 00:03:50.978427   26504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key: {Name:mk6e2aa8de1c5bf053715d6da8fcf1d78d730094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:03:50.978498   26504 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.key.543da273
	I0907 00:03:50.978518   26504 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.crt.543da273 with IP's: [192.168.39.212 10.96.0.1 127.0.0.1 10.0.0.1]
	I0907 00:03:51.024698   26504 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.crt.543da273 ...
	I0907 00:03:51.024722   26504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.crt.543da273: {Name:mke63e2ca0a0dde637d5eb05e87ec017d3bdc4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:03:51.024882   26504 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.key.543da273 ...
	I0907 00:03:51.024893   26504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.key.543da273: {Name:mke344e4f1cc0a84773c79b301e2b3e90bffc78d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:03:51.024957   26504 certs.go:337] copying /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.crt.543da273 -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.crt
	I0907 00:03:51.025035   26504 certs.go:341] copying /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.key.543da273 -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.key
	I0907 00:03:51.025087   26504 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.key
	I0907 00:03:51.025102   26504 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.crt with IP's: []
	I0907 00:03:51.171737   26504 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.crt ...
	I0907 00:03:51.171768   26504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.crt: {Name:mkdf9b65e45c2d22f72fe9054be70f8548a8110d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:03:51.171932   26504 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.key ...
	I0907 00:03:51.171945   26504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.key: {Name:mkcde1b873d2303e4e8e837152a4181b6bf7b5ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:03:51.172035   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0907 00:03:51.172053   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0907 00:03:51.172068   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0907 00:03:51.172089   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0907 00:03:51.172108   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0907 00:03:51.172122   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0907 00:03:51.172135   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0907 00:03:51.172161   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0907 00:03:51.172229   26504 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:03:51.172270   26504 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:03:51.172286   26504 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:03:51.172320   26504 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:03:51.172352   26504 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:03:51.172383   26504 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:03:51.172459   26504 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:03:51.172500   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> /usr/share/ca-certificates/136572.pem
	I0907 00:03:51.172521   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:03:51.172540   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem -> /usr/share/ca-certificates/13657.pem
	I0907 00:03:51.173069   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:03:51.198536   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 00:03:51.222934   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:03:51.247767   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:03:51.272999   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:03:51.297569   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:03:51.321703   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:03:51.346128   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:03:51.371770   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:03:51.397891   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:03:51.421435   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:03:51.445281   26504 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:03:51.461134   26504 ssh_runner.go:195] Run: openssl version
	I0907 00:03:51.466767   26504 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0907 00:03:51.466848   26504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:03:51.476897   26504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:03:51.481490   26504 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:03:51.481639   26504 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:03:51.481697   26504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:03:51.486870   26504 command_runner.go:130] > 3ec20f2e
	I0907 00:03:51.487030   26504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:03:51.496717   26504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:03:51.506484   26504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:03:51.511246   26504 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:03:51.511363   26504 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:03:51.511417   26504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:03:51.516923   26504 command_runner.go:130] > b5213941
	I0907 00:03:51.517001   26504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:03:51.527227   26504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:03:51.537041   26504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:03:51.541668   26504 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:03:51.541782   26504 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:03:51.541840   26504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:03:51.547475   26504 command_runner.go:130] > 51391683
	I0907 00:03:51.547541   26504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:03:51.557777   26504 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:03:51.562723   26504 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0907 00:03:51.562794   26504 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0907 00:03:51.562837   26504 kubeadm.go:404] StartCluster: {Name:multinode-816061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:03:51.562914   26504 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:03:51.562963   26504 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:03:51.595911   26504 cri.go:89] found id: ""
	I0907 00:03:51.595976   26504 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:03:51.605167   26504 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0907 00:03:51.605200   26504 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0907 00:03:51.605209   26504 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0907 00:03:51.605284   26504 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:03:51.614541   26504 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:03:51.623590   26504 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0907 00:03:51.623611   26504 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0907 00:03:51.623618   26504 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0907 00:03:51.623630   26504 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:03:51.623663   26504 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:03:51.623698   26504 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0907 00:03:51.730560   26504 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0907 00:03:51.730584   26504 command_runner.go:130] > [init] Using Kubernetes version: v1.28.1
	I0907 00:03:51.730650   26504 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 00:03:51.730655   26504 command_runner.go:130] > [preflight] Running pre-flight checks
	I0907 00:03:51.957079   26504 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:03:51.957104   26504 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:03:51.957261   26504 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:03:51.957285   26504 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:03:51.957400   26504 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 00:03:51.957420   26504 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 00:03:52.134821   26504 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:03:52.134924   26504 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:03:52.285210   26504 out.go:204]   - Generating certificates and keys ...
	I0907 00:03:52.285351   26504 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0907 00:03:52.285365   26504 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 00:03:52.285418   26504 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0907 00:03:52.285425   26504 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 00:03:52.303530   26504 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0907 00:03:52.303557   26504 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0907 00:03:52.601298   26504 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0907 00:03:52.601323   26504 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0907 00:03:52.837792   26504 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0907 00:03:52.837820   26504 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0907 00:03:53.017661   26504 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0907 00:03:53.017686   26504 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0907 00:03:53.202970   26504 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0907 00:03:53.202998   26504 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0907 00:03:53.203249   26504 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-816061] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0907 00:03:53.203270   26504 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-816061] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0907 00:03:53.346088   26504 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0907 00:03:53.346120   26504 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0907 00:03:53.346263   26504 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-816061] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0907 00:03:53.346275   26504 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-816061] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0907 00:03:53.589784   26504 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0907 00:03:53.589809   26504 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0907 00:03:53.851560   26504 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0907 00:03:53.851587   26504 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0907 00:03:54.037261   26504 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0907 00:03:54.037286   26504 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0907 00:03:54.037502   26504 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:03:54.037539   26504 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:03:54.144937   26504 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:03:54.144966   26504 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:03:54.389168   26504 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:03:54.389194   26504 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:03:54.477906   26504 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:03:54.477932   26504 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:03:54.646635   26504 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:03:54.646664   26504 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:03:54.647432   26504 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:03:54.647451   26504 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:03:54.650629   26504 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:03:54.652703   26504 out.go:204]   - Booting up control plane ...
	I0907 00:03:54.650662   26504 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:03:54.652838   26504 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:03:54.652853   26504 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:03:54.652912   26504 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:03:54.652920   26504 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:03:54.653687   26504 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:03:54.653707   26504 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:03:54.671484   26504 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:03:54.671515   26504 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:03:54.672070   26504 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:03:54.672096   26504 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:03:54.672204   26504 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0907 00:03:54.672218   26504 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0907 00:03:54.810605   26504 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 00:03:54.810632   26504 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 00:04:02.810742   26504 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004794 seconds
	I0907 00:04:02.810757   26504 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.004794 seconds
	I0907 00:04:02.810911   26504 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:04:02.810923   26504 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:04:02.829124   26504 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:04:02.829152   26504 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:04:03.363588   26504 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:04:03.363611   26504 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:04:03.363777   26504 kubeadm.go:322] [mark-control-plane] Marking the node multinode-816061 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0907 00:04:03.363788   26504 command_runner.go:130] > [mark-control-plane] Marking the node multinode-816061 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0907 00:04:03.877390   26504 kubeadm.go:322] [bootstrap-token] Using token: 0ouaih.dr1n7ntung7lsub8
	I0907 00:04:03.878845   26504 out.go:204]   - Configuring RBAC rules ...
	I0907 00:04:03.877431   26504 command_runner.go:130] > [bootstrap-token] Using token: 0ouaih.dr1n7ntung7lsub8
	I0907 00:04:03.878977   26504 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:04:03.878993   26504 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:04:03.888187   26504 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0907 00:04:03.888205   26504 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0907 00:04:03.906731   26504 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:04:03.906757   26504 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:04:03.910934   26504 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 00:04:03.910953   26504 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 00:04:03.915850   26504 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:04:03.915865   26504 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:04:03.919996   26504 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:04:03.920022   26504 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:04:03.942883   26504 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0907 00:04:03.942919   26504 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0907 00:04:04.168292   26504 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 00:04:04.168324   26504 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0907 00:04:04.293104   26504 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 00:04:04.293129   26504 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0907 00:04:04.294175   26504 kubeadm.go:322] 
	I0907 00:04:04.294268   26504 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 00:04:04.294284   26504 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0907 00:04:04.294297   26504 kubeadm.go:322] 
	I0907 00:04:04.294383   26504 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 00:04:04.294390   26504 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0907 00:04:04.294394   26504 kubeadm.go:322] 
	I0907 00:04:04.294415   26504 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 00:04:04.294421   26504 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0907 00:04:04.294482   26504 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:04:04.294518   26504 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:04:04.294588   26504 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:04:04.294597   26504 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:04:04.294603   26504 kubeadm.go:322] 
	I0907 00:04:04.294678   26504 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0907 00:04:04.294687   26504 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0907 00:04:04.294693   26504 kubeadm.go:322] 
	I0907 00:04:04.294804   26504 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0907 00:04:04.294815   26504 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0907 00:04:04.294820   26504 kubeadm.go:322] 
	I0907 00:04:04.294891   26504 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 00:04:04.294901   26504 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0907 00:04:04.295002   26504 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:04:04.295013   26504 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:04:04.295113   26504 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:04:04.295124   26504 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:04:04.295129   26504 kubeadm.go:322] 
	I0907 00:04:04.295250   26504 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0907 00:04:04.295261   26504 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0907 00:04:04.295486   26504 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 00:04:04.295507   26504 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0907 00:04:04.295513   26504 kubeadm.go:322] 
	I0907 00:04:04.295633   26504 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0ouaih.dr1n7ntung7lsub8 \
	I0907 00:04:04.295644   26504 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0ouaih.dr1n7ntung7lsub8 \
	I0907 00:04:04.295773   26504 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 00:04:04.295793   26504 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 00:04:04.295821   26504 kubeadm.go:322] 	--control-plane 
	I0907 00:04:04.295830   26504 command_runner.go:130] > 	--control-plane 
	I0907 00:04:04.295836   26504 kubeadm.go:322] 
	I0907 00:04:04.295957   26504 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:04:04.295967   26504 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:04:04.295977   26504 kubeadm.go:322] 
	I0907 00:04:04.296086   26504 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0ouaih.dr1n7ntung7lsub8 \
	I0907 00:04:04.296096   26504 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0ouaih.dr1n7ntung7lsub8 \
	I0907 00:04:04.296248   26504 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:04:04.296271   26504 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:04:04.296412   26504 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:04:04.296438   26504 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:04:04.296446   26504 cni.go:84] Creating CNI manager for ""
	I0907 00:04:04.296458   26504 cni.go:136] 1 nodes found, recommending kindnet
	I0907 00:04:04.299056   26504 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0907 00:04:04.300437   26504 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0907 00:04:04.313295   26504 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0907 00:04:04.313314   26504 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0907 00:04:04.313321   26504 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0907 00:04:04.313327   26504 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0907 00:04:04.313336   26504 command_runner.go:130] > Access: 2023-09-07 00:03:32.020590168 +0000
	I0907 00:04:04.313342   26504 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0907 00:04:04.313347   26504 command_runner.go:130] > Change: 2023-09-07 00:03:30.122590168 +0000
	I0907 00:04:04.313351   26504 command_runner.go:130] >  Birth: -
	I0907 00:04:04.313392   26504 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0907 00:04:04.313402   26504 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0907 00:04:04.346503   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0907 00:04:05.394984   26504 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0907 00:04:05.395008   26504 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0907 00:04:05.395019   26504 command_runner.go:130] > serviceaccount/kindnet created
	I0907 00:04:05.395025   26504 command_runner.go:130] > daemonset.apps/kindnet created
	I0907 00:04:05.395100   26504 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.048556145s)
	I0907 00:04:05.395149   26504 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:04:05.395235   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:05.395242   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=multinode-816061 minikube.k8s.io/updated_at=2023_09_07T00_04_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:05.434808   26504 command_runner.go:130] > -16
	I0907 00:04:05.434848   26504 ops.go:34] apiserver oom_adj: -16
	I0907 00:04:05.614230   26504 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0907 00:04:05.614330   26504 command_runner.go:130] > node/multinode-816061 labeled
	I0907 00:04:05.614336   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:05.700283   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:05.700368   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:05.783060   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:06.283882   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:06.369981   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:06.783535   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:06.863822   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:07.283374   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:07.370323   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:07.783913   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:07.875923   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:08.284161   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:08.368367   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:08.783215   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:08.872060   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:09.283556   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:09.375630   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:09.783371   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:09.864736   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:10.283293   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:10.380453   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:10.783621   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:10.867197   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:11.283583   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:11.366124   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:11.783436   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:11.878295   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:12.283304   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:12.365805   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:12.783322   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:12.891256   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:13.283812   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:13.377930   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:13.784080   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:13.896895   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:14.283407   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:14.375714   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:14.783299   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:14.875013   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:15.283942   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:15.378739   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:15.783535   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:15.880258   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:16.283466   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:16.382187   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:16.783870   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:16.882918   26504 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0907 00:04:17.283367   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:04:17.482625   26504 command_runner.go:130] > NAME      SECRETS   AGE
	I0907 00:04:17.482643   26504 command_runner.go:130] > default   0         0s
	I0907 00:04:17.484045   26504 kubeadm.go:1081] duration metric: took 12.088881343s to wait for elevateKubeSystemPrivileges.
	I0907 00:04:17.484077   26504 kubeadm.go:406] StartCluster complete in 25.921243396s
	I0907 00:04:17.484099   26504 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:04:17.484177   26504 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:04:17.485495   26504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:04:17.486296   26504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:04:17.486365   26504 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:04:17.486426   26504 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:04:17.486653   26504 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:04:17.486666   26504 addons.go:69] Setting storage-provisioner=true in profile "multinode-816061"
	I0907 00:04:17.486685   26504 addons.go:231] Setting addon storage-provisioner=true in "multinode-816061"
	I0907 00:04:17.486696   26504 addons.go:69] Setting default-storageclass=true in profile "multinode-816061"
	I0907 00:04:17.486728   26504 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-816061"
	I0907 00:04:17.486793   26504 host.go:66] Checking if "multinode-816061" exists ...
	I0907 00:04:17.486860   26504 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:04:17.487765   26504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:04:17.487766   26504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:04:17.487812   26504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:04:17.487862   26504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:04:17.488291   26504 cert_rotation.go:137] Starting client certificate rotation controller
	I0907 00:04:17.488640   26504 round_trippers.go:463] GET https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0907 00:04:17.488656   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:17.488669   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:17.488679   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:17.503573   26504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44597
	I0907 00:04:17.503614   26504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
	I0907 00:04:17.503997   26504 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:04:17.504111   26504 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:04:17.504442   26504 main.go:141] libmachine: Using API Version  1
	I0907 00:04:17.504467   26504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:04:17.504656   26504 main.go:141] libmachine: Using API Version  1
	I0907 00:04:17.504677   26504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:04:17.504780   26504 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:04:17.505034   26504 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:04:17.505189   26504 main.go:141] libmachine: (multinode-816061) Calling .GetState
	I0907 00:04:17.505350   26504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:04:17.505398   26504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:04:17.507486   26504 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:04:17.507789   26504 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:04:17.508190   26504 round_trippers.go:463] GET https://192.168.39.212:8443/apis/storage.k8s.io/v1/storageclasses
	I0907 00:04:17.508206   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:17.508218   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:17.508228   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:17.517414   26504 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0907 00:04:17.517442   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:17.517454   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:17 GMT
	I0907 00:04:17.517462   26504 round_trippers.go:580]     Audit-Id: b8a29220-efa9-42ca-9315-9916c581ad10
	I0907 00:04:17.517471   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:17.517479   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:17.517485   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:17.517491   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:17.517496   26504 round_trippers.go:580]     Content-Length: 291
	I0907 00:04:17.517522   26504 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"583de68c-e976-43b9-bd36-bcf190acd905","resourceVersion":"334","creationTimestamp":"2023-09-07T00:04:04Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0907 00:04:17.517892   26504 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"583de68c-e976-43b9-bd36-bcf190acd905","resourceVersion":"334","creationTimestamp":"2023-09-07T00:04:04Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0907 00:04:17.517939   26504 round_trippers.go:463] PUT https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0907 00:04:17.517945   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:17.517951   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:17.517961   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:17.517967   26504 round_trippers.go:473]     Content-Type: application/json
	I0907 00:04:17.520148   26504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36505
	I0907 00:04:17.520504   26504 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:04:17.520938   26504 main.go:141] libmachine: Using API Version  1
	I0907 00:04:17.520962   26504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:04:17.521268   26504 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:04:17.521449   26504 main.go:141] libmachine: (multinode-816061) Calling .GetState
	I0907 00:04:17.522761   26504 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:04:17.524870   26504 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:04:17.526448   26504 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:04:17.526469   26504 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:04:17.526489   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:04:17.529598   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:04:17.530037   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:04:17.530065   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:04:17.530215   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:04:17.530387   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:04:17.530572   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:04:17.530725   26504 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:04:17.541771   26504 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0907 00:04:17.541796   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:17.541807   26504 round_trippers.go:580]     Audit-Id: 3f9dad64-e656-4ded-89fe-56077acf2e29
	I0907 00:04:17.541816   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:17.541821   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:17.541827   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:17.541832   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:17.541841   26504 round_trippers.go:580]     Content-Length: 109
	I0907 00:04:17.541847   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:17 GMT
	I0907 00:04:17.544663   26504 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"371"},"items":[]}
	I0907 00:04:17.544941   26504 addons.go:231] Setting addon default-storageclass=true in "multinode-816061"
	I0907 00:04:17.544981   26504 host.go:66] Checking if "multinode-816061" exists ...
	I0907 00:04:17.545410   26504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:04:17.545489   26504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:04:17.545497   26504 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0907 00:04:17.545511   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:17.545525   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:17.545538   26504 round_trippers.go:580]     Content-Length: 291
	I0907 00:04:17.545547   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:17 GMT
	I0907 00:04:17.545558   26504 round_trippers.go:580]     Audit-Id: 801606c9-f948-42a2-8bab-cee99ab14e5c
	I0907 00:04:17.545566   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:17.545596   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:17.545612   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:17.545636   26504 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"583de68c-e976-43b9-bd36-bcf190acd905","resourceVersion":"372","creationTimestamp":"2023-09-07T00:04:04Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0907 00:04:17.545754   26504 round_trippers.go:463] GET https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0907 00:04:17.545763   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:17.545770   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:17.545776   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:17.549852   26504 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:04:17.549870   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:17.549878   26504 round_trippers.go:580]     Content-Length: 291
	I0907 00:04:17.549890   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:17 GMT
	I0907 00:04:17.549895   26504 round_trippers.go:580]     Audit-Id: 161ca0b9-479b-4214-90be-a57dd9344767
	I0907 00:04:17.549901   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:17.549910   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:17.549915   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:17.549921   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:17.549937   26504 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"583de68c-e976-43b9-bd36-bcf190acd905","resourceVersion":"372","creationTimestamp":"2023-09-07T00:04:04Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0907 00:04:17.550009   26504 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-816061" context rescaled to 1 replicas
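The PUT to .../deployments/coredns/scale above is the autoscaling/v1 Scale subresource in action: the current Scale object is fetched, spec.replicas is lowered from 2 to 1, and the object is written back. A minimal client-go sketch of that same GET-then-PUT sequence follows (not minikube's actual code); the kubeconfig location is an assumption and error handling is abbreviated.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed: kubeconfig at the default ~/.kube/config location.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	deployments := cs.AppsV1().Deployments("kube-system")

    	// GET the Scale subresource, as in the first round-tripper call above.
    	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}

    	// Lower spec.replicas and PUT it back, as in the second call.
    	scale.Spec.Replicas = 1
    	if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("coredns rescaled to 1 replica")
    }
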
	I0907 00:04:17.550034   26504 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:04:17.551680   26504 out.go:177] * Verifying Kubernetes components...
	I0907 00:04:17.553161   26504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:04:17.560495   26504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
	I0907 00:04:17.560891   26504 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:04:17.561419   26504 main.go:141] libmachine: Using API Version  1
	I0907 00:04:17.561442   26504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:04:17.561784   26504 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:04:17.562384   26504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:04:17.562435   26504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:04:17.577410   26504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35501
	I0907 00:04:17.577859   26504 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:04:17.578413   26504 main.go:141] libmachine: Using API Version  1
	I0907 00:04:17.578434   26504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:04:17.578790   26504 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:04:17.578985   26504 main.go:141] libmachine: (multinode-816061) Calling .GetState
	I0907 00:04:17.580848   26504 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:04:17.581108   26504 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:04:17.581125   26504 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:04:17.581140   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:04:17.583874   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:04:17.584286   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:04:17.584318   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:04:17.584545   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:04:17.584748   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:04:17.584909   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:04:17.585059   26504 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:04:17.720240   26504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:04:17.779221   26504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
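These two runs show the addon path end to end: the manifests were first copied into the VM over SSH (the "scp memory -->" lines above) and are now applied inside the guest with the bundled kubectl binary and the in-VM kubeconfig. A rough sketch of running that same remote apply with golang.org/x/crypto/ssh is below; the key path, IP, username and command string are taken from the log, everything else is assumed.

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// SSH key and endpoint as reported by sshutil.go above.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}

    	client, err := ssh.Dial("tcp", "192.168.39.212:22", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	// Same command the ssh_runner executes for the storage-provisioner addon.
    	out, err := session.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }
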
	I0907 00:04:17.812729   26504 command_runner.go:130] > apiVersion: v1
	I0907 00:04:17.812758   26504 command_runner.go:130] > data:
	I0907 00:04:17.812765   26504 command_runner.go:130] >   Corefile: |
	I0907 00:04:17.812770   26504 command_runner.go:130] >     .:53 {
	I0907 00:04:17.812775   26504 command_runner.go:130] >         errors
	I0907 00:04:17.812783   26504 command_runner.go:130] >         health {
	I0907 00:04:17.812790   26504 command_runner.go:130] >            lameduck 5s
	I0907 00:04:17.812795   26504 command_runner.go:130] >         }
	I0907 00:04:17.812816   26504 command_runner.go:130] >         ready
	I0907 00:04:17.812826   26504 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0907 00:04:17.812833   26504 command_runner.go:130] >            pods insecure
	I0907 00:04:17.812841   26504 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0907 00:04:17.812852   26504 command_runner.go:130] >            ttl 30
	I0907 00:04:17.812859   26504 command_runner.go:130] >         }
	I0907 00:04:17.812866   26504 command_runner.go:130] >         prometheus :9153
	I0907 00:04:17.812875   26504 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0907 00:04:17.812887   26504 command_runner.go:130] >            max_concurrent 1000
	I0907 00:04:17.812894   26504 command_runner.go:130] >         }
	I0907 00:04:17.812901   26504 command_runner.go:130] >         cache 30
	I0907 00:04:17.812908   26504 command_runner.go:130] >         loop
	I0907 00:04:17.812915   26504 command_runner.go:130] >         reload
	I0907 00:04:17.812922   26504 command_runner.go:130] >         loadbalance
	I0907 00:04:17.812928   26504 command_runner.go:130] >     }
	I0907 00:04:17.812935   26504 command_runner.go:130] > kind: ConfigMap
	I0907 00:04:17.812941   26504 command_runner.go:130] > metadata:
	I0907 00:04:17.812953   26504 command_runner.go:130] >   creationTimestamp: "2023-09-07T00:04:04Z"
	I0907 00:04:17.812964   26504 command_runner.go:130] >   name: coredns
	I0907 00:04:17.812972   26504 command_runner.go:130] >   namespace: kube-system
	I0907 00:04:17.812983   26504 command_runner.go:130] >   resourceVersion: "256"
	I0907 00:04:17.812991   26504 command_runner.go:130] >   uid: ecb72cf6-2a6c-419e-8770-8a9176c286a3
	I0907 00:04:17.814388   26504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
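The bash pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts plugin block that resolves host.minikube.internal to the host-side gateway (192.168.39.1) ahead of the forward plugin, adds log after errors, and replaces the ConfigMap. A hedged client-go sketch of the same edit, expressed as a string rewrite instead of sed, follows; the gateway IP comes from the surrounding log, while the kubeconfig location is an assumption.

    package main

    import (
    	"context"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}

    	hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
    	corefile := cm.Data["Corefile"]
    	if !strings.Contains(corefile, "host.minikube.internal") {
    		// Insert the hosts block just before the forward plugin, mirroring the sed rule above.
    		corefile = strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
    		cm.Data["Corefile"] = corefile
    		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
    			panic(err)
    		}
    	}
    }
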
	I0907 00:04:17.814595   26504 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:04:17.814875   26504 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:04:17.815143   26504 node_ready.go:35] waiting up to 6m0s for node "multinode-816061" to be "Ready" ...
	I0907 00:04:17.815231   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:17.815242   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:17.815254   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:17.815268   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:17.934241   26504 round_trippers.go:574] Response Status: 200 OK in 118 milliseconds
	I0907 00:04:17.934266   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:17.934276   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:17.934285   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:17 GMT
	I0907 00:04:17.934294   26504 round_trippers.go:580]     Audit-Id: a1072647-2451-4c4e-8c05-07a436280c37
	I0907 00:04:17.934301   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:17.934307   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:17.934314   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:17.946901   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:17.947520   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:17.947536   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:17.947546   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:17.947555   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:17.964949   26504 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0907 00:04:17.964985   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:17.964995   26504 round_trippers.go:580]     Audit-Id: 6a435922-fd6a-44a4-9f1d-473652b9adc1
	I0907 00:04:17.965039   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:17.965048   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:17.965058   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:17.965068   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:17.965077   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:17 GMT
	I0907 00:04:18.013781   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:18.515213   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:18.515233   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:18.515248   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:18.515257   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:18.517704   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:18.517730   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:18.517741   26504 round_trippers.go:580]     Audit-Id: 667c6b7f-8ce9-4405-a1db-8eb78c81787b
	I0907 00:04:18.517751   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:18.517761   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:18.517770   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:18.517779   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:18.517787   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:18 GMT
	I0907 00:04:18.517969   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:18.624440   26504 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0907 00:04:18.633558   26504 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0907 00:04:18.647008   26504 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0907 00:04:18.658736   26504 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0907 00:04:18.668381   26504 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0907 00:04:18.682791   26504 command_runner.go:130] > pod/storage-provisioner created
	I0907 00:04:18.685542   26504 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0907 00:04:18.685593   26504 main.go:141] libmachine: Making call to close driver server
	I0907 00:04:18.685607   26504 main.go:141] libmachine: (multinode-816061) Calling .Close
	I0907 00:04:18.685632   26504 command_runner.go:130] > configmap/coredns replaced
	I0907 00:04:18.685688   26504 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0907 00:04:18.685752   26504 main.go:141] libmachine: Making call to close driver server
	I0907 00:04:18.685784   26504 main.go:141] libmachine: (multinode-816061) Calling .Close
	I0907 00:04:18.685872   26504 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:04:18.685888   26504 main.go:141] libmachine: (multinode-816061) DBG | Closing plugin on server side
	I0907 00:04:18.685891   26504 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:04:18.685903   26504 main.go:141] libmachine: Making call to close driver server
	I0907 00:04:18.685913   26504 main.go:141] libmachine: (multinode-816061) Calling .Close
	I0907 00:04:18.686009   26504 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:04:18.686026   26504 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:04:18.686035   26504 main.go:141] libmachine: Making call to close driver server
	I0907 00:04:18.686049   26504 main.go:141] libmachine: (multinode-816061) Calling .Close
	I0907 00:04:18.686179   26504 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:04:18.686190   26504 main.go:141] libmachine: (multinode-816061) DBG | Closing plugin on server side
	I0907 00:04:18.686194   26504 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:04:18.686254   26504 main.go:141] libmachine: Making call to close driver server
	I0907 00:04:18.686265   26504 main.go:141] libmachine: (multinode-816061) Calling .Close
	I0907 00:04:18.686341   26504 main.go:141] libmachine: (multinode-816061) DBG | Closing plugin on server side
	I0907 00:04:18.686440   26504 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:04:18.686449   26504 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:04:18.686528   26504 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:04:18.686625   26504 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:04:18.686653   26504 main.go:141] libmachine: (multinode-816061) DBG | Closing plugin on server side
	I0907 00:04:18.688357   26504 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0907 00:04:18.689651   26504 addons.go:502] enable addons completed in 1.203276274s: enabled=[storage-provisioner default-storageclass]
	I0907 00:04:19.014966   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:19.015007   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:19.015020   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:19.015031   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:19.017685   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:19.017711   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:19.017722   26504 round_trippers.go:580]     Audit-Id: a7b3581e-f5a8-4820-b9c5-2300bf33a39a
	I0907 00:04:19.017731   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:19.017741   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:19.017749   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:19.017761   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:19.017771   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:19 GMT
	I0907 00:04:19.018188   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:19.514906   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:19.514943   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:19.514956   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:19.514965   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:19.517773   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:19.517800   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:19.517811   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:19 GMT
	I0907 00:04:19.517820   26504 round_trippers.go:580]     Audit-Id: 210ab351-db77-46af-bd3a-c6a0aa72196c
	I0907 00:04:19.517832   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:19.517855   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:19.517867   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:19.517876   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:19.518076   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:20.014720   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:20.014744   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:20.014752   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:20.014759   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:20.017812   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:04:20.017839   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:20.017849   26504 round_trippers.go:580]     Audit-Id: 3f2a1621-3676-4887-a27a-ec2c763de0c8
	I0907 00:04:20.017857   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:20.017866   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:20.017873   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:20.017884   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:20.017893   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:20 GMT
	I0907 00:04:20.018361   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:20.018820   26504 node_ready.go:58] node "multinode-816061" has status "Ready":"False"
	I0907 00:04:20.515086   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:20.515110   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:20.515118   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:20.515124   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:20.517733   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:20.517760   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:20.517771   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:20.517780   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:20.517788   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:20.517796   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:20.517803   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:20 GMT
	I0907 00:04:20.517811   26504 round_trippers.go:580]     Audit-Id: 1636f2fb-c3f6-4baa-acb2-0d85bbfa61cc
	I0907 00:04:20.518257   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:21.014978   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:21.015001   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:21.015010   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:21.015016   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:21.017926   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:21.017947   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:21.017954   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:21.017960   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:21.017965   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:21.017971   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:21.017976   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:21 GMT
	I0907 00:04:21.017981   26504 round_trippers.go:580]     Audit-Id: b4d6489a-8338-479a-8ea0-848586e3f7e1
	I0907 00:04:21.018477   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:21.515243   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:21.515265   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:21.515274   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:21.515280   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:21.517961   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:21.517991   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:21.518001   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:21.518010   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:21.518018   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:21 GMT
	I0907 00:04:21.518027   26504 round_trippers.go:580]     Audit-Id: cc31428e-be0c-4544-9903-4fa9a9ea2b88
	I0907 00:04:21.518036   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:21.518045   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:21.518241   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:22.014419   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:22.014443   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:22.014454   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:22.014465   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:22.017427   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:22.017449   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:22.017458   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:22 GMT
	I0907 00:04:22.017466   26504 round_trippers.go:580]     Audit-Id: d56e957c-f2d7-40d1-aba2-183b9a4e1e22
	I0907 00:04:22.017474   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:22.017482   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:22.017488   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:22.017496   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:22.017983   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:22.514915   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:22.514943   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:22.514954   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:22.514962   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:22.520475   26504 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0907 00:04:22.520519   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:22.520527   26504 round_trippers.go:580]     Audit-Id: 964b935a-daee-4d1f-97e8-5d5d8ea44cc0
	I0907 00:04:22.520534   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:22.520540   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:22.520549   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:22.520563   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:22.520573   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:22 GMT
	I0907 00:04:22.520783   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:22.521131   26504 node_ready.go:58] node "multinode-816061" has status "Ready":"False"
	I0907 00:04:23.014947   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:23.014985   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:23.014997   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:23.015019   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:23.018459   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:04:23.018481   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:23.018491   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:23.018497   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:23.018503   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:23 GMT
	I0907 00:04:23.018512   26504 round_trippers.go:580]     Audit-Id: 017aa924-b5cc-412e-9b59-0d9174d64e8d
	I0907 00:04:23.018517   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:23.018523   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:23.019087   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:23.514737   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:23.514762   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:23.514771   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:23.514793   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:23.517653   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:23.517675   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:23.517685   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:23.517692   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:23 GMT
	I0907 00:04:23.517701   26504 round_trippers.go:580]     Audit-Id: cf8a242e-683c-489b-8b21-61dc0dc9be66
	I0907 00:04:23.517709   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:23.517718   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:23.517723   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:23.517884   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"375","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0907 00:04:24.015172   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:24.015190   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:24.015198   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:24.015204   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:24.017876   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:24.017892   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:24.017899   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:24.017905   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:24.017910   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:24.017915   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:24 GMT
	I0907 00:04:24.017921   26504 round_trippers.go:580]     Audit-Id: d445ad35-f2a6-4a10-9593-ccb429a08a69
	I0907 00:04:24.017926   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:24.018200   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:04:24.018531   26504 node_ready.go:49] node "multinode-816061" has status "Ready":"True"
	I0907 00:04:24.018546   26504 node_ready.go:38] duration metric: took 6.203386692s waiting for node "multinode-816061" to be "Ready" ...
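The node_ready lines above amount to a plain poll loop: GET the Node object roughly every 500ms, inspect status.conditions, and stop once the Ready condition flips to True (about 6.2s here, within a 6m0s budget). A minimal sketch of that loop, assuming the default kubeconfig and using the node name from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-816061", metav1.GetOptions{})
    		if err != nil {
    			panic(err)
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				fmt.Println("node multinode-816061 is Ready")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for node to become Ready")
    }
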
	I0907 00:04:24.018555   26504 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:04:24.018616   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:04:24.018626   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:24.018632   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:24.018638   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:24.022288   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:04:24.022310   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:24.022320   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:24.022328   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:24.022337   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:24 GMT
	I0907 00:04:24.022346   26504 round_trippers.go:580]     Audit-Id: 63a242f6-9fb9-498c-b114-85a1552fefbf
	I0907 00:04:24.022353   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:24.022362   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:24.026151   26504 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"431","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54819 chars]
	I0907 00:04:24.029437   26504 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace to be "Ready" ...
	I0907 00:04:24.029505   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:04:24.029521   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:24.029532   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:24.029544   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:24.033237   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:04:24.033257   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:24.033265   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:24.033273   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:24.033281   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:24 GMT
	I0907 00:04:24.033293   26504 round_trippers.go:580]     Audit-Id: 72b194a8-4442-4d23-b707-396b9b08cf93
	I0907 00:04:24.033302   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:24.033314   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:24.033486   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"431","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0907 00:04:24.033878   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:24.033890   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:24.033897   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:24.033903   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:24.036358   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:24.036378   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:24.036389   26504 round_trippers.go:580]     Audit-Id: 877842ca-af1c-4a0e-8acd-0b997543c1ab
	I0907 00:04:24.036398   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:24.036407   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:24.036416   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:24.036425   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:24.036437   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:24 GMT
	I0907 00:04:24.036807   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:04:24.037123   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:04:24.037133   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:24.037141   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:24.037147   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:24.040812   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:04:24.040830   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:24.040840   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:24 GMT
	I0907 00:04:24.040848   26504 round_trippers.go:580]     Audit-Id: 9ed24370-1b26-465d-9a4b-f55098929411
	I0907 00:04:24.040855   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:24.040864   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:24.040873   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:24.040881   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:24.041062   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"431","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0907 00:04:24.041416   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:24.041431   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:24.041443   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:24.041454   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:24.044063   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:24.044084   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:24.044094   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:24 GMT
	I0907 00:04:24.044103   26504 round_trippers.go:580]     Audit-Id: f064e3ed-1e79-4b95-b639-57b8d4bd2b2c
	I0907 00:04:24.044112   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:24.044121   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:24.044130   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:24.044146   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:24.044271   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:04:24.545056   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:04:24.545079   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:24.545087   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:24.545097   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:24.548232   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:04:24.548256   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:24.548267   26504 round_trippers.go:580]     Audit-Id: b40c72ca-51a5-43cc-989e-eeb4263acf48
	I0907 00:04:24.548279   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:24.548289   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:24.548301   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:24.548317   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:24.548326   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:24 GMT
	I0907 00:04:24.549017   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"431","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0907 00:04:24.549424   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:24.549435   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:24.549446   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:24.549457   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:24.551796   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:24.551821   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:24.551831   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:24.551843   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:24 GMT
	I0907 00:04:24.551852   26504 round_trippers.go:580]     Audit-Id: c2bc36ff-26e0-4553-8046-cdb8935f6f6a
	I0907 00:04:24.551860   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:24.551873   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:24.551881   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:24.552254   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:04:25.044876   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:04:25.044909   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:25.044918   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:25.044925   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:25.048042   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:04:25.048068   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:25.048078   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:25.048086   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:25.048094   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:25.048103   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:25 GMT
	I0907 00:04:25.048111   26504 round_trippers.go:580]     Audit-Id: 1801c54f-b87a-41e0-b2e2-ef06be1c30d1
	I0907 00:04:25.048121   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:25.048247   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"431","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0907 00:04:25.048693   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:25.048705   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:25.048712   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:25.048718   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:25.051041   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:25.051061   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:25.051070   26504 round_trippers.go:580]     Audit-Id: 5bbf324b-4eef-43f4-b9db-aa5801a44165
	I0907 00:04:25.051079   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:25.051087   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:25.051105   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:25.051114   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:25.051126   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:25 GMT
	I0907 00:04:25.051330   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:04:25.545004   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:04:25.545033   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:25.545045   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:25.545067   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:25.548096   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:04:25.548117   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:25.548127   26504 round_trippers.go:580]     Audit-Id: b710f143-9ca8-43d7-815e-fc8cac3d13fc
	I0907 00:04:25.548136   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:25.548144   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:25.548153   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:25.548163   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:25.548172   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:25 GMT
	I0907 00:04:25.548279   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"446","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0907 00:04:25.548695   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:25.548708   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:25.548715   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:25.548738   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:25.551104   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:25.551125   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:25.551135   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:25.551143   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:25.551152   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:25.551161   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:25.551170   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:25 GMT
	I0907 00:04:25.551179   26504 round_trippers.go:580]     Audit-Id: 6b08f48a-65de-4372-88e4-f15ac716f13a
	I0907 00:04:25.551303   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:04:25.551688   26504 pod_ready.go:92] pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace has status "Ready":"True"
	I0907 00:04:25.551707   26504 pod_ready.go:81] duration metric: took 1.52225066s waiting for pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace to be "Ready" ...
	I0907 00:04:25.551719   26504 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:04:25.551780   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:04:25.551790   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:25.551801   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:25.551814   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:25.553991   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:25.554005   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:25.554012   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:25.554018   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:25.554027   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:25.554035   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:25 GMT
	I0907 00:04:25.554043   26504 round_trippers.go:580]     Audit-Id: a5ae1f97-0a09-434a-9096-b1a178d86e3c
	I0907 00:04:25.554055   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:25.554273   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"434","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0907 00:04:25.554590   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:25.554600   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:25.554607   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:25.554613   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:25.557275   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:25.557293   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:25.557300   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:25.557306   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:25.557311   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:25.557316   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:25 GMT
	I0907 00:04:25.557322   26504 round_trippers.go:580]     Audit-Id: 5ed16f76-d4d8-467b-939d-0230e475b591
	I0907 00:04:25.557327   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:25.558284   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:04:25.558534   26504 pod_ready.go:92] pod "etcd-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:04:25.558545   26504 pod_ready.go:81] duration metric: took 6.81993ms waiting for pod "etcd-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:04:25.558555   26504 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:04:25.558610   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-816061
	I0907 00:04:25.558617   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:25.558624   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:25.558630   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:25.560679   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:25.560697   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:25.560707   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:25.560714   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:25.560724   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:25.560735   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:25.560745   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:25 GMT
	I0907 00:04:25.560760   26504 round_trippers.go:580]     Audit-Id: 6eb7ed8c-bf92-4052-bf6b-d84456270fad
	I0907 00:04:25.560905   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-816061","namespace":"kube-system","uid":"dbbbc2db-98c3-44e3-a18d-947bad7ffda2","resourceVersion":"435","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.212:8443","kubernetes.io/config.hash":"17d9280f4f521ce2f8119c5c317f1d67","kubernetes.io/config.mirror":"17d9280f4f521ce2f8119c5c317f1d67","kubernetes.io/config.seen":"2023-09-07T00:04:04.251716113Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0907 00:04:25.561242   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:25.561252   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:25.561259   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:25.561264   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:25.562831   26504 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:04:25.562844   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:25.562852   26504 round_trippers.go:580]     Audit-Id: 1092743e-039b-4420-9e9d-453efdedc1ba
	I0907 00:04:25.562858   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:25.562863   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:25.562871   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:25.562884   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:25.562896   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:25 GMT
	I0907 00:04:25.562999   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:04:25.563241   26504 pod_ready.go:92] pod "kube-apiserver-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:04:25.563251   26504 pod_ready.go:81] duration metric: took 4.690904ms waiting for pod "kube-apiserver-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:04:25.563258   26504 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:04:25.563293   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-816061
	I0907 00:04:25.563300   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:25.563306   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:25.563312   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:25.565119   26504 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:04:25.565131   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:25.565137   26504 round_trippers.go:580]     Audit-Id: 9c22520d-6768-4f0f-9e66-69c1698df68c
	I0907 00:04:25.565142   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:25.565148   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:25.565153   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:25.565158   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:25.565167   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:25 GMT
	I0907 00:04:25.565306   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-816061","namespace":"kube-system","uid":"ea192806-6f42-4471-8e73-ae96aa3bfa06","resourceVersion":"433","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45d88e9a1c94ef1043c5c8795b51d51f","kubernetes.io/config.mirror":"45d88e9a1c94ef1043c5c8795b51d51f","kubernetes.io/config.seen":"2023-09-07T00:04:04.251717776Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0907 00:04:25.615893   26504 request.go:629] Waited for 50.232458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:25.615952   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:25.615957   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:25.615964   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:25.615970   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:25.618585   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:25.618603   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:25.618611   26504 round_trippers.go:580]     Audit-Id: 45842a27-f189-4546-a4ba-59b78d8b9e69
	I0907 00:04:25.618616   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:25.618621   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:25.618630   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:25.618639   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:25.618648   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:25 GMT
	I0907 00:04:25.618801   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:04:25.619138   26504 pod_ready.go:92] pod "kube-controller-manager-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:04:25.619155   26504 pod_ready.go:81] duration metric: took 55.891647ms waiting for pod "kube-controller-manager-multinode-816061" in "kube-system" namespace to be "Ready" ...
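	The "Waited for ... due to client-side throttling, not priority and fairness" entries above and below are emitted by client-go when its per-client rate limiter delays a request; they are unrelated to the server-side Priority and Fairness feature named in the X-Kubernetes-Pf-* response headers. As a minimal sketch (not minikube's actual code; the function name is invented for illustration), that budget is controlled by the QPS and Burst fields on rest.Config:

	    package readiness

	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // newThrottledClient builds a clientset whose client-side rate limiter
	    // uses the client-go defaults (QPS=5, Burst=10). Bursts of GETs beyond
	    // that budget are delayed locally, producing "Waited for ..." log lines
	    // like the ones in this run.
	    func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
	        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            return nil, err
	        }
	        cfg.QPS = 5
	        cfg.Burst = 10
	        return kubernetes.NewForConfig(cfg)
	    }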
	I0907 00:04:25.619165   26504 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbzlv" in "kube-system" namespace to be "Ready" ...
	I0907 00:04:25.815651   26504 request.go:629] Waited for 196.400995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbzlv
	I0907 00:04:25.815718   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbzlv
	I0907 00:04:25.815722   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:25.815730   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:25.815736   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:25.818638   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:25.818657   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:25.818665   26504 round_trippers.go:580]     Audit-Id: ecc8cdf2-e556-4520-bc1c-b5339a3fe294
	I0907 00:04:25.818670   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:25.818676   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:25.818681   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:25.818686   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:25.818694   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:25 GMT
	I0907 00:04:25.819050   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbzlv","generateName":"kube-proxy-","namespace":"kube-system","uid":"6b9717d8-174b-4713-a941-382c81cc659e","resourceVersion":"414","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0907 00:04:26.015802   26504 request.go:629] Waited for 196.355715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:26.015870   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:26.015875   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:26.015882   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:26.015889   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:26.018686   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:26.018701   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:26.018708   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:26.018713   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:26.018719   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:26.018724   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:26 GMT
	I0907 00:04:26.018729   26504 round_trippers.go:580]     Audit-Id: be508334-c373-4a11-a25c-dcc105f97da6
	I0907 00:04:26.018735   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:26.018904   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:04:26.019195   26504 pod_ready.go:92] pod "kube-proxy-tbzlv" in "kube-system" namespace has status "Ready":"True"
	I0907 00:04:26.019209   26504 pod_ready.go:81] duration metric: took 400.039439ms waiting for pod "kube-proxy-tbzlv" in "kube-system" namespace to be "Ready" ...
	I0907 00:04:26.019217   26504 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:04:26.215691   26504 request.go:629] Waited for 196.395546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-816061
	I0907 00:04:26.215742   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-816061
	I0907 00:04:26.215746   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:26.215754   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:26.215761   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:26.218600   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:26.218620   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:26.218627   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:26.218633   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:26 GMT
	I0907 00:04:26.218638   26504 round_trippers.go:580]     Audit-Id: 56decb85-cae9-40ff-acf7-43321606ab36
	I0907 00:04:26.218644   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:26.218649   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:26.218655   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:26.219000   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-816061","namespace":"kube-system","uid":"3fa4fad1-c309-42a9-af5f-28e6398492c7","resourceVersion":"432","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ac3fb26098ffac0d0e40ebb845f9b9fe","kubernetes.io/config.mirror":"ac3fb26098ffac0d0e40ebb845f9b9fe","kubernetes.io/config.seen":"2023-09-07T00:04:04.251718754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0907 00:04:26.415762   26504 request.go:629] Waited for 196.401897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:26.415819   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:04:26.415826   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:26.415834   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:26.415840   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:26.418678   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:26.418697   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:26.418706   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:26.418713   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:26.418720   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:26.418728   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:26.418736   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:26 GMT
	I0907 00:04:26.418745   26504 round_trippers.go:580]     Audit-Id: e423d0c0-9e4e-4152-bd46-8ef2659036c1
	I0907 00:04:26.419155   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:04:26.419520   26504 pod_ready.go:92] pod "kube-scheduler-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:04:26.419539   26504 pod_ready.go:81] duration metric: took 400.316282ms waiting for pod "kube-scheduler-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:04:26.419552   26504 pod_ready.go:38] duration metric: took 2.40098825s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
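	The pod_ready loop recorded above polls each system-critical pod (and its node) roughly every 500ms until the pod reports Ready. A minimal client-go sketch of that per-pod check, with assumed names and without minikube's node handling, might look like this:

	    package readiness

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitPodReady polls the named kube-system pod every 500ms (the cadence
	    // visible in the timestamps above) until its PodReady condition is True
	    // or the timeout expires.
	    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
	            func(ctx context.Context) (bool, error) {
	                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    // Tolerate transient API errors and keep polling.
	                    return false, nil
	                }
	                for _, c := range pod.Status.Conditions {
	                    if c.Type == corev1.PodReady {
	                        return c.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil
	            })
	    }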
	I0907 00:04:26.419574   26504 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:04:26.419625   26504 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:04:26.434198   26504 command_runner.go:130] > 1070
	I0907 00:04:26.434227   26504 api_server.go:72] duration metric: took 8.884172725s to wait for apiserver process to appear ...
	I0907 00:04:26.434234   26504 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:04:26.434246   26504 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0907 00:04:26.439115   26504 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
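	The two checks just logged are: a process check (pgrep for a kube-apiserver process on the guest) followed by an HTTPS probe of /healthz that expects a 200 with the literal body "ok". A rough local equivalent, with an assumed function name and run directly rather than over the SSH runner minikube uses, could be:

	    package readiness

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "os/exec"
	        "strings"
	        "time"
	    )

	    // apiserverHealthy checks that a kube-apiserver process exists, then
	    // probes https://<host>/healthz and requires an "ok" body.
	    func apiserverHealthy(host string) error {
	        if err := exec.Command("pgrep", "-f", "kube-apiserver").Run(); err != nil {
	            return fmt.Errorf("kube-apiserver process not found: %w", err)
	        }
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            // Certificate verification is skipped here only to keep the sketch short.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        resp, err := client.Get("https://" + host + "/healthz")
	        if err != nil {
	            return err
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
	            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	        }
	        return nil
	    }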
	I0907 00:04:26.439176   26504 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I0907 00:04:26.439185   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:26.439196   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:26.439203   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:26.440185   26504 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0907 00:04:26.440201   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:26.440208   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:26.440214   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:26.440220   26504 round_trippers.go:580]     Content-Length: 263
	I0907 00:04:26.440225   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:26 GMT
	I0907 00:04:26.440232   26504 round_trippers.go:580]     Audit-Id: bda6abfd-4de9-40c5-8a89-89ad65fd62ba
	I0907 00:04:26.440238   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:26.440244   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:26.440259   26504 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0907 00:04:26.440338   26504 api_server.go:141] control plane version: v1.28.1
	I0907 00:04:26.440357   26504 api_server.go:131] duration metric: took 6.116126ms to wait for apiserver health ...
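	The GET /version above, whose JSON body reports gitVersion v1.28.1, is what client-go's discovery client wraps. A brief sketch of reading the control-plane version that way (assumed function name, not minikube's code):

	    package readiness

	    import "k8s.io/client-go/kubernetes"

	    // controlPlaneVersion issues GET /version via the discovery client and
	    // returns the gitVersion field ("v1.28.1" in the run above).
	    func controlPlaneVersion(cs *kubernetes.Clientset) (string, error) {
	        info, err := cs.Discovery().ServerVersion()
	        if err != nil {
	            return "", err
	        }
	        return info.GitVersion, nil
	    }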
	I0907 00:04:26.440365   26504 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:04:26.615779   26504 request.go:629] Waited for 175.349444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:04:26.615845   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:04:26.615853   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:26.615861   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:26.615870   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:26.619856   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:04:26.619879   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:26.619886   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:26.619892   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:26 GMT
	I0907 00:04:26.619898   26504 round_trippers.go:580]     Audit-Id: 2d37df0f-8851-4c78-9cdc-60c1c6443ccd
	I0907 00:04:26.619903   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:26.619908   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:26.619913   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:26.621128   26504 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"446","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53997 chars]
	I0907 00:04:26.622907   26504 system_pods.go:59] 8 kube-system pods found
	I0907 00:04:26.622927   26504 system_pods.go:61] "coredns-5dd5756b68-8ktxh" [c2574ba0-f19a-40c1-a06f-601bb17661f6] Running
	I0907 00:04:26.622933   26504 system_pods.go:61] "etcd-multinode-816061" [7ff498e1-17ed-4818-befa-68a5a69b96d4] Running
	I0907 00:04:26.622937   26504 system_pods.go:61] "kindnet-xgbtc" [137c032b-12d1-4179-8416-0f3cc5733842] Running
	I0907 00:04:26.622942   26504 system_pods.go:61] "kube-apiserver-multinode-816061" [dbbbc2db-98c3-44e3-a18d-947bad7ffda2] Running
	I0907 00:04:26.622950   26504 system_pods.go:61] "kube-controller-manager-multinode-816061" [ea192806-6f42-4471-8e73-ae96aa3bfa06] Running
	I0907 00:04:26.622954   26504 system_pods.go:61] "kube-proxy-tbzlv" [6b9717d8-174b-4713-a941-382c81cc659e] Running
	I0907 00:04:26.622960   26504 system_pods.go:61] "kube-scheduler-multinode-816061" [3fa4fad1-c309-42a9-af5f-28e6398492c7] Running
	I0907 00:04:26.622964   26504 system_pods.go:61] "storage-provisioner" [3ce467f7-aaa1-4391-9bc9-39ef0521ebd2] Running
	I0907 00:04:26.622971   26504 system_pods.go:74] duration metric: took 182.602386ms to wait for pod list to return data ...
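	The survey above lists the kube-system namespace once and confirms every pod is Running. An illustrative equivalent with client-go (assumed names; a sketch, not the implementation behind system_pods.go):

	    package readiness

	    import (
	        "context"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // notRunningSystemPods lists kube-system and returns the names of any
	    // pods that are not yet in phase Running. An empty result corresponds
	    // to the "8 kube-system pods found ... Running" lines above.
	    func notRunningSystemPods(ctx context.Context, cs *kubernetes.Clientset) ([]string, error) {
	        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	        if err != nil {
	            return nil, err
	        }
	        var pending []string
	        for _, p := range pods.Items {
	            if p.Status.Phase != corev1.PodRunning {
	                pending = append(pending, p.Name)
	            }
	        }
	        return pending, nil
	    }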
	I0907 00:04:26.622980   26504 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:04:26.815310   26504 request.go:629] Waited for 192.264471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I0907 00:04:26.815365   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I0907 00:04:26.815370   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:26.815378   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:26.815385   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:26.817990   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:26.818014   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:26.818025   26504 round_trippers.go:580]     Audit-Id: a3c488d4-57fa-46d9-a9df-b9884f205079
	I0907 00:04:26.818033   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:26.818049   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:26.818057   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:26.818065   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:26.818077   26504 round_trippers.go:580]     Content-Length: 261
	I0907 00:04:26.818087   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:26 GMT
	I0907 00:04:26.818111   26504 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"41407859-a71f-4f4f-b9db-b147bd408b48","resourceVersion":"353","creationTimestamp":"2023-09-07T00:04:17Z"}}]}
	I0907 00:04:26.818292   26504 default_sa.go:45] found service account: "default"
	I0907 00:04:26.818306   26504 default_sa.go:55] duration metric: took 195.321047ms for default service account to be created ...
	I0907 00:04:26.818316   26504 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:04:27.015756   26504 request.go:629] Waited for 197.377919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:04:27.015811   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:04:27.015815   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:27.015823   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:27.015829   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:27.020050   26504 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:04:27.020075   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:27.020086   26504 round_trippers.go:580]     Audit-Id: f08763fb-4ba9-427e-8f6f-a57d6da2f096
	I0907 00:04:27.020093   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:27.020099   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:27.020104   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:27.020109   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:27.020115   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:27 GMT
	I0907 00:04:27.021548   26504 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"446","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53997 chars]
	I0907 00:04:27.023217   26504 system_pods.go:86] 8 kube-system pods found
	I0907 00:04:27.023233   26504 system_pods.go:89] "coredns-5dd5756b68-8ktxh" [c2574ba0-f19a-40c1-a06f-601bb17661f6] Running
	I0907 00:04:27.023238   26504 system_pods.go:89] "etcd-multinode-816061" [7ff498e1-17ed-4818-befa-68a5a69b96d4] Running
	I0907 00:04:27.023242   26504 system_pods.go:89] "kindnet-xgbtc" [137c032b-12d1-4179-8416-0f3cc5733842] Running
	I0907 00:04:27.023246   26504 system_pods.go:89] "kube-apiserver-multinode-816061" [dbbbc2db-98c3-44e3-a18d-947bad7ffda2] Running
	I0907 00:04:27.023252   26504 system_pods.go:89] "kube-controller-manager-multinode-816061" [ea192806-6f42-4471-8e73-ae96aa3bfa06] Running
	I0907 00:04:27.023255   26504 system_pods.go:89] "kube-proxy-tbzlv" [6b9717d8-174b-4713-a941-382c81cc659e] Running
	I0907 00:04:27.023259   26504 system_pods.go:89] "kube-scheduler-multinode-816061" [3fa4fad1-c309-42a9-af5f-28e6398492c7] Running
	I0907 00:04:27.023263   26504 system_pods.go:89] "storage-provisioner" [3ce467f7-aaa1-4391-9bc9-39ef0521ebd2] Running
	I0907 00:04:27.023272   26504 system_pods.go:126] duration metric: took 204.951961ms to wait for k8s-apps to be running ...
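
For illustration, a minimal Go sketch of the kind of check the "waiting for k8s-apps to be running" step above performs: list the kube-system pods with client-go and report whether each is in the Running phase. This is a sketch assuming a kubeconfig in the default location, not minikube's actual system_pods.go implementation.

    package main

    import (
        "context"
        "fmt"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        // Build a clientset from ~/.kube/config (placeholder; minikube uses its profile's kubeconfig).
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // List kube-system pods and check their phase, as the log lines above do.
        pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%q running=%v\n", p.Name, p.Status.Phase == corev1.PodRunning)
        }
    }
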
	I0907 00:04:27.023278   26504 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:04:27.023316   26504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:04:27.039331   26504 system_svc.go:56] duration metric: took 16.043422ms WaitForService to wait for kubelet.
	I0907 00:04:27.039354   26504 kubeadm.go:581] duration metric: took 9.489298325s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:04:27.039371   26504 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:04:27.215812   26504 request.go:629] Waited for 176.38119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I0907 00:04:27.215879   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I0907 00:04:27.215884   26504 round_trippers.go:469] Request Headers:
	I0907 00:04:27.215895   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:04:27.215901   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:04:27.218531   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:04:27.218547   26504 round_trippers.go:577] Response Headers:
	I0907 00:04:27.218553   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:04:27 GMT
	I0907 00:04:27.218559   26504 round_trippers.go:580]     Audit-Id: 262fadcb-8a7c-48b7-aefe-3e0e397b2120
	I0907 00:04:27.218564   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:04:27.218569   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:04:27.218574   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:04:27.218579   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:04:27.218886   26504 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I0907 00:04:27.219226   26504 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:04:27.219243   26504 node_conditions.go:123] node cpu capacity is 2
	I0907 00:04:27.219251   26504 node_conditions.go:105] duration metric: took 179.876981ms to run NodePressure ...
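
The NodePressure step above reads each node's capacity (CPU and ephemeral storage) from the NodeList response. A small sketch of that read, assuming a clientset built as in the previous sketch; the helper name is illustrative only.

    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists all nodes and prints the two capacity values
    // the log above reports (cpu and ephemeral-storage).
    func printNodeCapacity(client kubernetes.Interface) error {
        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
        return nil
    }
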
	I0907 00:04:27.219260   26504 start.go:228] waiting for startup goroutines ...
	I0907 00:04:27.219266   26504 start.go:233] waiting for cluster config update ...
	I0907 00:04:27.219274   26504 start.go:242] writing updated cluster config ...
	I0907 00:04:27.221696   26504 out.go:177] 
	I0907 00:04:27.223181   26504 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:04:27.223250   26504 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/config.json ...
	I0907 00:04:27.224880   26504 out.go:177] * Starting worker node multinode-816061-m02 in cluster multinode-816061
	I0907 00:04:27.226195   26504 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:04:27.226212   26504 cache.go:57] Caching tarball of preloaded images
	I0907 00:04:27.226312   26504 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:04:27.226325   26504 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 00:04:27.226390   26504 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/config.json ...
	I0907 00:04:27.226525   26504 start.go:365] acquiring machines lock for multinode-816061-m02: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:04:27.226565   26504 start.go:369] acquired machines lock for "multinode-816061-m02" in 20.404µs
	I0907 00:04:27.226602   26504 start.go:93] Provisioning new machine with config: &{Name:multinode-816061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0907 00:04:27.226660   26504 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0907 00:04:27.228310   26504 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0907 00:04:27.228385   26504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:04:27.228419   26504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:04:27.242144   26504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46017
	I0907 00:04:27.242555   26504 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:04:27.243116   26504 main.go:141] libmachine: Using API Version  1
	I0907 00:04:27.243145   26504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:04:27.243426   26504 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:04:27.243662   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetMachineName
	I0907 00:04:27.243787   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:04:27.243939   26504 start.go:159] libmachine.API.Create for "multinode-816061" (driver="kvm2")
	I0907 00:04:27.243966   26504 client.go:168] LocalClient.Create starting
	I0907 00:04:27.243989   26504 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem
	I0907 00:04:27.244015   26504 main.go:141] libmachine: Decoding PEM data...
	I0907 00:04:27.244030   26504 main.go:141] libmachine: Parsing certificate...
	I0907 00:04:27.244092   26504 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem
	I0907 00:04:27.244117   26504 main.go:141] libmachine: Decoding PEM data...
	I0907 00:04:27.244126   26504 main.go:141] libmachine: Parsing certificate...
	I0907 00:04:27.244141   26504 main.go:141] libmachine: Running pre-create checks...
	I0907 00:04:27.244148   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .PreCreateCheck
	I0907 00:04:27.244399   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetConfigRaw
	I0907 00:04:27.244783   26504 main.go:141] libmachine: Creating machine...
	I0907 00:04:27.244799   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .Create
	I0907 00:04:27.244947   26504 main.go:141] libmachine: (multinode-816061-m02) Creating KVM machine...
	I0907 00:04:27.245977   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found existing default KVM network
	I0907 00:04:27.246095   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found existing private KVM network mk-multinode-816061
	I0907 00:04:27.246283   26504 main.go:141] libmachine: (multinode-816061-m02) Setting up store path in /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02 ...
	I0907 00:04:27.246310   26504 main.go:141] libmachine: (multinode-816061-m02) Building disk image from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0907 00:04:27.246345   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:27.246244   26864 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:04:27.246428   26504 main.go:141] libmachine: (multinode-816061-m02) Downloading /home/jenkins/minikube-integration/17174-6470/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0907 00:04:27.437725   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:27.437592   26864 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/id_rsa...
	I0907 00:04:27.525328   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:27.525215   26864 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/multinode-816061-m02.rawdisk...
	I0907 00:04:27.525359   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Writing magic tar header
	I0907 00:04:27.525371   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Writing SSH key tar header
	I0907 00:04:27.525380   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:27.525350   26864 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02 ...
	I0907 00:04:27.525512   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02
	I0907 00:04:27.525539   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines
	I0907 00:04:27.525556   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:04:27.525566   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470
	I0907 00:04:27.525577   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0907 00:04:27.525587   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Checking permissions on dir: /home/jenkins
	I0907 00:04:27.525603   26504 main.go:141] libmachine: (multinode-816061-m02) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02 (perms=drwx------)
	I0907 00:04:27.525618   26504 main.go:141] libmachine: (multinode-816061-m02) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines (perms=drwxr-xr-x)
	I0907 00:04:27.525634   26504 main.go:141] libmachine: (multinode-816061-m02) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube (perms=drwxr-xr-x)
	I0907 00:04:27.525650   26504 main.go:141] libmachine: (multinode-816061-m02) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470 (perms=drwxrwxr-x)
	I0907 00:04:27.525663   26504 main.go:141] libmachine: (multinode-816061-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0907 00:04:27.525677   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Checking permissions on dir: /home
	I0907 00:04:27.525688   26504 main.go:141] libmachine: (multinode-816061-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0907 00:04:27.525701   26504 main.go:141] libmachine: (multinode-816061-m02) Creating domain...
	I0907 00:04:27.525715   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Skipping /home - not owner
	I0907 00:04:27.526657   26504 main.go:141] libmachine: (multinode-816061-m02) define libvirt domain using xml: 
	I0907 00:04:27.526681   26504 main.go:141] libmachine: (multinode-816061-m02) <domain type='kvm'>
	I0907 00:04:27.526692   26504 main.go:141] libmachine: (multinode-816061-m02)   <name>multinode-816061-m02</name>
	I0907 00:04:27.526703   26504 main.go:141] libmachine: (multinode-816061-m02)   <memory unit='MiB'>2200</memory>
	I0907 00:04:27.526726   26504 main.go:141] libmachine: (multinode-816061-m02)   <vcpu>2</vcpu>
	I0907 00:04:27.526743   26504 main.go:141] libmachine: (multinode-816061-m02)   <features>
	I0907 00:04:27.526752   26504 main.go:141] libmachine: (multinode-816061-m02)     <acpi/>
	I0907 00:04:27.526758   26504 main.go:141] libmachine: (multinode-816061-m02)     <apic/>
	I0907 00:04:27.526767   26504 main.go:141] libmachine: (multinode-816061-m02)     <pae/>
	I0907 00:04:27.526773   26504 main.go:141] libmachine: (multinode-816061-m02)     
	I0907 00:04:27.526809   26504 main.go:141] libmachine: (multinode-816061-m02)   </features>
	I0907 00:04:27.526833   26504 main.go:141] libmachine: (multinode-816061-m02)   <cpu mode='host-passthrough'>
	I0907 00:04:27.526844   26504 main.go:141] libmachine: (multinode-816061-m02)   
	I0907 00:04:27.526857   26504 main.go:141] libmachine: (multinode-816061-m02)   </cpu>
	I0907 00:04:27.526868   26504 main.go:141] libmachine: (multinode-816061-m02)   <os>
	I0907 00:04:27.526876   26504 main.go:141] libmachine: (multinode-816061-m02)     <type>hvm</type>
	I0907 00:04:27.526882   26504 main.go:141] libmachine: (multinode-816061-m02)     <boot dev='cdrom'/>
	I0907 00:04:27.526892   26504 main.go:141] libmachine: (multinode-816061-m02)     <boot dev='hd'/>
	I0907 00:04:27.526905   26504 main.go:141] libmachine: (multinode-816061-m02)     <bootmenu enable='no'/>
	I0907 00:04:27.526920   26504 main.go:141] libmachine: (multinode-816061-m02)   </os>
	I0907 00:04:27.526934   26504 main.go:141] libmachine: (multinode-816061-m02)   <devices>
	I0907 00:04:27.526947   26504 main.go:141] libmachine: (multinode-816061-m02)     <disk type='file' device='cdrom'>
	I0907 00:04:27.526965   26504 main.go:141] libmachine: (multinode-816061-m02)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/boot2docker.iso'/>
	I0907 00:04:27.526981   26504 main.go:141] libmachine: (multinode-816061-m02)       <target dev='hdc' bus='scsi'/>
	I0907 00:04:27.527000   26504 main.go:141] libmachine: (multinode-816061-m02)       <readonly/>
	I0907 00:04:27.527015   26504 main.go:141] libmachine: (multinode-816061-m02)     </disk>
	I0907 00:04:27.527030   26504 main.go:141] libmachine: (multinode-816061-m02)     <disk type='file' device='disk'>
	I0907 00:04:27.527045   26504 main.go:141] libmachine: (multinode-816061-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0907 00:04:27.527060   26504 main.go:141] libmachine: (multinode-816061-m02)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/multinode-816061-m02.rawdisk'/>
	I0907 00:04:27.527066   26504 main.go:141] libmachine: (multinode-816061-m02)       <target dev='hda' bus='virtio'/>
	I0907 00:04:27.527075   26504 main.go:141] libmachine: (multinode-816061-m02)     </disk>
	I0907 00:04:27.527081   26504 main.go:141] libmachine: (multinode-816061-m02)     <interface type='network'>
	I0907 00:04:27.527101   26504 main.go:141] libmachine: (multinode-816061-m02)       <source network='mk-multinode-816061'/>
	I0907 00:04:27.527120   26504 main.go:141] libmachine: (multinode-816061-m02)       <model type='virtio'/>
	I0907 00:04:27.527130   26504 main.go:141] libmachine: (multinode-816061-m02)     </interface>
	I0907 00:04:27.527135   26504 main.go:141] libmachine: (multinode-816061-m02)     <interface type='network'>
	I0907 00:04:27.527143   26504 main.go:141] libmachine: (multinode-816061-m02)       <source network='default'/>
	I0907 00:04:27.527149   26504 main.go:141] libmachine: (multinode-816061-m02)       <model type='virtio'/>
	I0907 00:04:27.527155   26504 main.go:141] libmachine: (multinode-816061-m02)     </interface>
	I0907 00:04:27.527166   26504 main.go:141] libmachine: (multinode-816061-m02)     <serial type='pty'>
	I0907 00:04:27.527175   26504 main.go:141] libmachine: (multinode-816061-m02)       <target port='0'/>
	I0907 00:04:27.527183   26504 main.go:141] libmachine: (multinode-816061-m02)     </serial>
	I0907 00:04:27.527196   26504 main.go:141] libmachine: (multinode-816061-m02)     <console type='pty'>
	I0907 00:04:27.527211   26504 main.go:141] libmachine: (multinode-816061-m02)       <target type='serial' port='0'/>
	I0907 00:04:27.527227   26504 main.go:141] libmachine: (multinode-816061-m02)     </console>
	I0907 00:04:27.527238   26504 main.go:141] libmachine: (multinode-816061-m02)     <rng model='virtio'>
	I0907 00:04:27.527253   26504 main.go:141] libmachine: (multinode-816061-m02)       <backend model='random'>/dev/random</backend>
	I0907 00:04:27.527267   26504 main.go:141] libmachine: (multinode-816061-m02)     </rng>
	I0907 00:04:27.527280   26504 main.go:141] libmachine: (multinode-816061-m02)     
	I0907 00:04:27.527295   26504 main.go:141] libmachine: (multinode-816061-m02)     
	I0907 00:04:27.527307   26504 main.go:141] libmachine: (multinode-816061-m02)   </devices>
	I0907 00:04:27.527318   26504 main.go:141] libmachine: (multinode-816061-m02) </domain>
	I0907 00:04:27.527333   26504 main.go:141] libmachine: (multinode-816061-m02) 
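
The lines above show the libvirt domain XML the driver generates before creating the guest. A sketch of defining and starting a domain from such an XML document with the libvirt Go bindings; the file path is a placeholder and this is not the driver's actual code.

    package main

    import (
        "fmt"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Placeholder file holding a domain definition like the one logged above.
        xml, err := os.ReadFile("multinode-816061-m02.xml")
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Persistently define the guest from the XML, then boot it.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            panic(err)
        }
        fmt.Println("domain defined and started")
    }
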
	I0907 00:04:27.534407   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:83:6c:d9 in network default
	I0907 00:04:27.535102   26504 main.go:141] libmachine: (multinode-816061-m02) Ensuring networks are active...
	I0907 00:04:27.535124   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:27.535919   26504 main.go:141] libmachine: (multinode-816061-m02) Ensuring network default is active
	I0907 00:04:27.536255   26504 main.go:141] libmachine: (multinode-816061-m02) Ensuring network mk-multinode-816061 is active
	I0907 00:04:27.536603   26504 main.go:141] libmachine: (multinode-816061-m02) Getting domain xml...
	I0907 00:04:27.537335   26504 main.go:141] libmachine: (multinode-816061-m02) Creating domain...
	I0907 00:04:28.756119   26504 main.go:141] libmachine: (multinode-816061-m02) Waiting to get IP...
	I0907 00:04:28.756912   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:28.757276   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:28.757306   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:28.757262   26864 retry.go:31] will retry after 280.919423ms: waiting for machine to come up
	I0907 00:04:29.039971   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:29.040384   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:29.040417   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:29.040333   26864 retry.go:31] will retry after 315.120741ms: waiting for machine to come up
	I0907 00:04:29.356763   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:29.357185   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:29.357238   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:29.357122   26864 retry.go:31] will retry after 472.270757ms: waiting for machine to come up
	I0907 00:04:29.830662   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:29.831125   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:29.831155   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:29.831100   26864 retry.go:31] will retry after 470.662216ms: waiting for machine to come up
	I0907 00:04:30.303774   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:30.304170   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:30.304192   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:30.304130   26864 retry.go:31] will retry after 483.424188ms: waiting for machine to come up
	I0907 00:04:30.788794   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:30.789304   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:30.789327   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:30.789249   26864 retry.go:31] will retry after 928.630644ms: waiting for machine to come up
	I0907 00:04:31.719180   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:31.719666   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:31.719691   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:31.719597   26864 retry.go:31] will retry after 717.896887ms: waiting for machine to come up
	I0907 00:04:32.438838   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:32.439223   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:32.439246   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:32.439198   26864 retry.go:31] will retry after 1.402537077s: waiting for machine to come up
	I0907 00:04:33.842875   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:33.843306   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:33.843334   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:33.843243   26864 retry.go:31] will retry after 1.200235013s: waiting for machine to come up
	I0907 00:04:35.045887   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:35.046375   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:35.046393   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:35.046348   26864 retry.go:31] will retry after 2.220499266s: waiting for machine to come up
	I0907 00:04:37.268757   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:37.269289   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:37.269324   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:37.269229   26864 retry.go:31] will retry after 2.319656162s: waiting for machine to come up
	I0907 00:04:39.591140   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:39.591548   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:39.591575   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:39.591498   26864 retry.go:31] will retry after 2.200130054s: waiting for machine to come up
	I0907 00:04:41.792760   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:41.793195   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:41.793226   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:41.793125   26864 retry.go:31] will retry after 4.174088352s: waiting for machine to come up
	I0907 00:04:45.971855   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:45.972244   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find current IP address of domain multinode-816061-m02 in network mk-multinode-816061
	I0907 00:04:45.972272   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | I0907 00:04:45.972202   26864 retry.go:31] will retry after 5.336433259s: waiting for machine to come up
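
The "will retry after ..." lines above poll for the guest's DHCP lease with a growing, jittered delay. A self-contained Go sketch of that pattern; lookupIP is a stand-in for the actual lease query, not minikube's retry.go.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookupIP until it returns a non-empty address or the
    // timeout expires, sleeping a jittered, roughly doubling delay between tries.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
            fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no lease yet") // simulate the lease not existing yet
            }
            return "192.168.39.44", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
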
	I0907 00:04:51.311959   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.312460   26504 main.go:141] libmachine: (multinode-816061-m02) Found IP for machine: 192.168.39.44
	I0907 00:04:51.312484   26504 main.go:141] libmachine: (multinode-816061-m02) Reserving static IP address...
	I0907 00:04:51.312500   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has current primary IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.312861   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | unable to find host DHCP lease matching {name: "multinode-816061-m02", mac: "52:54:00:72:a5:bb", ip: "192.168.39.44"} in network mk-multinode-816061
	I0907 00:04:51.386299   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Getting to WaitForSSH function...
	I0907 00:04:51.386338   26504 main.go:141] libmachine: (multinode-816061-m02) Reserved static IP address: 192.168.39.44
	I0907 00:04:51.386354   26504 main.go:141] libmachine: (multinode-816061-m02) Waiting for SSH to be available...
	I0907 00:04:51.389106   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.389501   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:minikube Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:51.389522   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.389726   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Using SSH client type: external
	I0907 00:04:51.389755   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/id_rsa (-rw-------)
	I0907 00:04:51.389788   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:04:51.389805   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | About to run SSH command:
	I0907 00:04:51.389822   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | exit 0
	I0907 00:04:51.478846   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | SSH cmd err, output: <nil>: 
	I0907 00:04:51.479081   26504 main.go:141] libmachine: (multinode-816061-m02) KVM machine creation complete!
	I0907 00:04:51.479473   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetConfigRaw
	I0907 00:04:51.480038   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:04:51.480263   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:04:51.480477   26504 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0907 00:04:51.480497   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetState
	I0907 00:04:51.481611   26504 main.go:141] libmachine: Detecting operating system of created instance...
	I0907 00:04:51.481628   26504 main.go:141] libmachine: Waiting for SSH to be available...
	I0907 00:04:51.481637   26504 main.go:141] libmachine: Getting to WaitForSSH function...
	I0907 00:04:51.481647   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:04:51.484176   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.484551   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:51.484587   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.484745   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:04:51.484891   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:51.485051   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:51.485209   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:04:51.485364   26504 main.go:141] libmachine: Using SSH client type: native
	I0907 00:04:51.485784   26504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0907 00:04:51.485797   26504 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0907 00:04:51.598087   26504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
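
The probe above simply runs "exit 0" over SSH to confirm the guest accepts key-based logins. A sketch of the same reachability check with golang.org/x/crypto/ssh; the host, user, and key path are placeholders taken from the log, not libmachine's implementation.

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Placeholder path to the machine's generated private key.
        key, err := os.ReadFile("/path/to/machines/multinode-816061-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh test VMs have unknown host keys
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", "192.168.39.44:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        // The same no-op command the log shows: success means SSH is available.
        if err := session.Run("exit 0"); err != nil {
            panic(err)
        }
        fmt.Println("SSH is available")
    }
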
	I0907 00:04:51.598108   26504 main.go:141] libmachine: Detecting the provisioner...
	I0907 00:04:51.598116   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:04:51.601794   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.602556   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:51.602592   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.602719   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:04:51.602923   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:51.603094   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:51.603276   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:04:51.603469   26504 main.go:141] libmachine: Using SSH client type: native
	I0907 00:04:51.603843   26504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0907 00:04:51.603854   26504 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0907 00:04:51.719996   26504 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0907 00:04:51.720055   26504 main.go:141] libmachine: found compatible host: buildroot
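
Provisioner detection above boils down to reading /etc/os-release over SSH and inspecting the ID field (here "buildroot"). A small sketch of parsing that file's KEY=value lines locally; illustrative only.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // parseOSRelease reads an os-release style file into a map, stripping
    // surrounding quotes from values and skipping blank/comment lines.
    func parseOSRelease(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        out := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`)
        }
        return out, sc.Err()
    }

    func main() {
        info, err := parseOSRelease("/etc/os-release")
        if err != nil {
            panic(err)
        }
        fmt.Printf("ID=%s PRETTY_NAME=%s\n", info["ID"], info["PRETTY_NAME"])
    }
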
	I0907 00:04:51.720066   26504 main.go:141] libmachine: Provisioning with buildroot...
	I0907 00:04:51.720077   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetMachineName
	I0907 00:04:51.720360   26504 buildroot.go:166] provisioning hostname "multinode-816061-m02"
	I0907 00:04:51.720383   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetMachineName
	I0907 00:04:51.720564   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:04:51.723192   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.723630   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:51.723659   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.723783   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:04:51.723946   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:51.724111   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:51.724275   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:04:51.724443   26504 main.go:141] libmachine: Using SSH client type: native
	I0907 00:04:51.724876   26504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0907 00:04:51.724892   26504 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-816061-m02 && echo "multinode-816061-m02" | sudo tee /etc/hostname
	I0907 00:04:51.854651   26504 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-816061-m02
	
	I0907 00:04:51.854675   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:04:51.857809   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.858306   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:51.858339   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.858543   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:04:51.858720   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:51.858882   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:51.859014   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:04:51.859163   26504 main.go:141] libmachine: Using SSH client type: native
	I0907 00:04:51.859578   26504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0907 00:04:51.859597   26504 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-816061-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-816061-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-816061-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:04:51.984071   26504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:04:51.984109   26504 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:04:51.984132   26504 buildroot.go:174] setting up certificates
	I0907 00:04:51.984152   26504 provision.go:83] configureAuth start
	I0907 00:04:51.984173   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetMachineName
	I0907 00:04:51.984443   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetIP
	I0907 00:04:51.987364   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.987694   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:51.987728   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.987895   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:04:51.989877   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.990270   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:51.990294   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:51.990410   26504 provision.go:138] copyHostCerts
	I0907 00:04:51.990440   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:04:51.990473   26504 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:04:51.990485   26504 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:04:51.990575   26504 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:04:51.990661   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:04:51.990684   26504 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:04:51.990692   26504 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:04:51.990726   26504 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:04:51.991043   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:04:51.991073   26504 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:04:51.991080   26504 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:04:51.991123   26504 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:04:51.991207   26504 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.multinode-816061-m02 san=[192.168.39.44 192.168.39.44 localhost 127.0.0.1 minikube multinode-816061-m02]
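
The provision step above issues a server certificate whose SANs include the machine's IP, localhost, and its hostnames, signed by the profile's CA. A hedged sketch of issuing such a certificate with Go's crypto/x509; the CA file names are placeholders and the CA key is assumed to be PKCS#8-encoded, which may differ from what minikube actually writes.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caCertPEM, err := os.ReadFile("ca.pem") // placeholder: CA certificate (PEM)
        if err != nil {
            panic(err)
        }
        caKeyPEM, err := os.ReadFile("ca-key.pem") // placeholder: CA key, assumed PKCS#8
        if err != nil {
            panic(err)
        }
        caBlock, _ := pem.Decode(caCertPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            panic(err)
        }
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS8PrivateKey(keyBlock.Bytes)
        if err != nil {
            panic(err)
        }

        // New key pair for the server certificate.
        serverKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-816061-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirroring the log line above.
            IPAddresses: []net.IP{net.ParseIP("192.168.39.44"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "multinode-816061-m02"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
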
	I0907 00:04:52.144205   26504 provision.go:172] copyRemoteCerts
	I0907 00:04:52.144258   26504 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:04:52.144281   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:04:52.147307   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.147628   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:52.147662   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.147934   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:04:52.148134   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:52.148290   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:04:52.148488   26504 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/id_rsa Username:docker}
	I0907 00:04:52.236988   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0907 00:04:52.237052   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0907 00:04:52.260822   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0907 00:04:52.260883   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:04:52.283833   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0907 00:04:52.283903   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:04:52.306206   26504 provision.go:86] duration metric: configureAuth took 322.036863ms
	I0907 00:04:52.306232   26504 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:04:52.306467   26504 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:04:52.306552   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:04:52.309416   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.309815   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:52.309850   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.310061   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:04:52.310281   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:52.310490   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:52.310685   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:04:52.310838   26504 main.go:141] libmachine: Using SSH client type: native
	I0907 00:04:52.311401   26504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0907 00:04:52.311424   26504 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:04:52.615830   26504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:04:52.615862   26504 main.go:141] libmachine: Checking connection to Docker...
	I0907 00:04:52.615874   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetURL
	I0907 00:04:52.617284   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | Using libvirt version 6000000
	I0907 00:04:52.619731   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.620138   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:52.620168   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.620389   26504 main.go:141] libmachine: Docker is up and running!
	I0907 00:04:52.620409   26504 main.go:141] libmachine: Reticulating splines...
	I0907 00:04:52.620417   26504 client.go:171] LocalClient.Create took 25.376443817s
	I0907 00:04:52.620442   26504 start.go:167] duration metric: libmachine.API.Create for "multinode-816061" took 25.376506021s
	I0907 00:04:52.620455   26504 start.go:300] post-start starting for "multinode-816061-m02" (driver="kvm2")
	I0907 00:04:52.620466   26504 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:04:52.620489   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:04:52.620710   26504 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:04:52.620740   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:04:52.623038   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.623387   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:52.623416   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.623570   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:04:52.623750   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:52.623931   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:04:52.624110   26504 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/id_rsa Username:docker}
	I0907 00:04:52.712374   26504 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:04:52.717009   26504 command_runner.go:130] > NAME=Buildroot
	I0907 00:04:52.717028   26504 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0907 00:04:52.717032   26504 command_runner.go:130] > ID=buildroot
	I0907 00:04:52.717037   26504 command_runner.go:130] > VERSION_ID=2021.02.12
	I0907 00:04:52.717041   26504 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0907 00:04:52.717066   26504 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:04:52.717075   26504 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:04:52.717138   26504 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:04:52.717206   26504 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:04:52.717214   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> /etc/ssl/certs/136572.pem
	I0907 00:04:52.717299   26504 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:04:52.725902   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:04:52.750328   26504 start.go:303] post-start completed in 129.858617ms
	I0907 00:04:52.750373   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetConfigRaw
	I0907 00:04:52.751062   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetIP
	I0907 00:04:52.753842   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.754269   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:52.754304   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.754630   26504 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/config.json ...
	I0907 00:04:52.754907   26504 start.go:128] duration metric: createHost completed in 25.528235973s
	I0907 00:04:52.754938   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:04:52.757817   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.758211   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:52.758231   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.758401   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:04:52.758594   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:52.758766   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:52.758924   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:04:52.759068   26504 main.go:141] libmachine: Using SSH client type: native
	I0907 00:04:52.759448   26504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0907 00:04:52.759459   26504 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:04:52.875864   26504 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694045092.861074998
	
	I0907 00:04:52.875883   26504 fix.go:206] guest clock: 1694045092.861074998
	I0907 00:04:52.875893   26504 fix.go:219] Guest: 2023-09-07 00:04:52.861074998 +0000 UTC Remote: 2023-09-07 00:04:52.754922082 +0000 UTC m=+94.125640998 (delta=106.152916ms)
	I0907 00:04:52.875910   26504 fix.go:190] guest clock delta is within tolerance: 106.152916ms
	I0907 00:04:52.875916   26504 start.go:83] releasing machines lock for "multinode-816061-m02", held for 25.649327214s
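The clock check just above reads the guest time over SSH (evidently date +%s.%N, rendered with Go's %!s(MISSING) placeholders by the logger) and compares it with the local timestamp; the 106ms delta is a straight subtraction of the two values the log prints:

    guest  = 00:04:52.861074998 UTC
    remote = 00:04:52.754922082 UTC
    delta  = 0.861074998 s - 0.754922082 s = 0.106152916 s ≈ 106.152916 ms  (within tolerance, so no clock fix is applied)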
	I0907 00:04:52.875936   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:04:52.876187   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetIP
	I0907 00:04:52.878705   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.879066   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:52.879095   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.881401   26504 out.go:177] * Found network options:
	I0907 00:04:52.882905   26504 out.go:177]   - NO_PROXY=192.168.39.212
	W0907 00:04:52.884285   26504 proxy.go:119] fail to check proxy env: Error ip not in block
	I0907 00:04:52.884323   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:04:52.884749   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:04:52.884927   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:04:52.885003   26504 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:04:52.885041   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	W0907 00:04:52.885096   26504 proxy.go:119] fail to check proxy env: Error ip not in block
	I0907 00:04:52.885162   26504 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:04:52.885186   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:04:52.887631   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.887899   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.888000   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:52.888038   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.888153   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:04:52.888292   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:52.888316   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:52.888317   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:52.888473   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:04:52.888490   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:04:52.888685   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:04:52.888695   26504 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/id_rsa Username:docker}
	I0907 00:04:52.888826   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:04:52.888942   26504 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/id_rsa Username:docker}
	I0907 00:04:52.990481   26504 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0907 00:04:53.129271   26504 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0907 00:04:53.135076   26504 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0907 00:04:53.135117   26504 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:04:53.135176   26504 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:04:53.149854   26504 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0907 00:04:53.149897   26504 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
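The find invocation logged above (its format verbs are mangled by the logger) disables any bridge/podman CNI definitions by renaming them to *.mk_disabled; typed into a shell it is roughly the following sketch, where the quoting is an assumption and the paths and predicates are taken from the log:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;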
	I0907 00:04:53.149907   26504 start.go:466] detecting cgroup driver to use...
	I0907 00:04:53.149968   26504 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:04:53.163778   26504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:04:53.175896   26504 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:04:53.175962   26504 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:04:53.188296   26504 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:04:53.201080   26504 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:04:53.216349   26504 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0907 00:04:53.310827   26504 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:04:53.325415   26504 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0907 00:04:53.436872   26504 docker.go:212] disabling docker service ...
	I0907 00:04:53.436936   26504 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:04:53.451872   26504 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:04:53.464195   26504 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0907 00:04:53.464283   26504 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:04:53.586832   26504 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0907 00:04:53.586961   26504 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:04:53.711852   26504 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0907 00:04:53.711885   26504 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0907 00:04:53.711952   26504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:04:53.725171   26504 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:04:53.742803   26504 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
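The tee command above leaves crictl pointed at the CRI-O socket; per the logged output, the resulting file is the one-line YAML:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock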
	I0907 00:04:53.742846   26504 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:04:53.742897   26504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:04:53.752899   26504 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:04:53.752955   26504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:04:53.762998   26504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:04:53.772702   26504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:04:53.782797   26504 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:04:53.792852   26504 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:04:53.801667   26504 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:04:53.801755   26504 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:04:53.801820   26504 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:04:53.815455   26504 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:04:53.824329   26504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:04:53.942087   26504 ssh_runner.go:195] Run: sudo systemctl restart crio
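Condensed, the CRI-O preparation the runner logs above (pause image, cgroup driver, bridge netfilter, IP forwarding, service restart) amounts to the following shell sequence, a minimal sketch with every path and value taken from the log:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter               # the bridge-nf-call-iptables sysctl was absent, so the module is loaded first
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio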
	I0907 00:04:54.118719   26504 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:04:54.118813   26504 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:04:54.124257   26504 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0907 00:04:54.124286   26504 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0907 00:04:54.124296   26504 command_runner.go:130] > Device: 16h/22d	Inode: 713         Links: 1
	I0907 00:04:54.124307   26504 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0907 00:04:54.124314   26504 command_runner.go:130] > Access: 2023-09-07 00:04:54.091524914 +0000
	I0907 00:04:54.124323   26504 command_runner.go:130] > Modify: 2023-09-07 00:04:54.091524914 +0000
	I0907 00:04:54.124330   26504 command_runner.go:130] > Change: 2023-09-07 00:04:54.091524914 +0000
	I0907 00:04:54.124336   26504 command_runner.go:130] >  Birth: -
	I0907 00:04:54.124358   26504 start.go:534] Will wait 60s for crictl version
	I0907 00:04:54.124415   26504 ssh_runner.go:195] Run: which crictl
	I0907 00:04:54.128603   26504 command_runner.go:130] > /usr/bin/crictl
	I0907 00:04:54.128718   26504 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:04:54.160365   26504 command_runner.go:130] > Version:  0.1.0
	I0907 00:04:54.160659   26504 command_runner.go:130] > RuntimeName:  cri-o
	I0907 00:04:54.160752   26504 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0907 00:04:54.160817   26504 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0907 00:04:54.163489   26504 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:04:54.163548   26504 ssh_runner.go:195] Run: crio --version
	I0907 00:04:54.212333   26504 command_runner.go:130] > crio version 1.24.1
	I0907 00:04:54.212355   26504 command_runner.go:130] > Version:          1.24.1
	I0907 00:04:54.212365   26504 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0907 00:04:54.212371   26504 command_runner.go:130] > GitTreeState:     dirty
	I0907 00:04:54.212394   26504 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0907 00:04:54.212403   26504 command_runner.go:130] > GoVersion:        go1.19.9
	I0907 00:04:54.212410   26504 command_runner.go:130] > Compiler:         gc
	I0907 00:04:54.212421   26504 command_runner.go:130] > Platform:         linux/amd64
	I0907 00:04:54.212441   26504 command_runner.go:130] > Linkmode:         dynamic
	I0907 00:04:54.212456   26504 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0907 00:04:54.212462   26504 command_runner.go:130] > SeccompEnabled:   true
	I0907 00:04:54.212468   26504 command_runner.go:130] > AppArmorEnabled:  false
	I0907 00:04:54.212682   26504 ssh_runner.go:195] Run: crio --version
	I0907 00:04:54.256432   26504 command_runner.go:130] > crio version 1.24.1
	I0907 00:04:54.256461   26504 command_runner.go:130] > Version:          1.24.1
	I0907 00:04:54.256472   26504 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0907 00:04:54.256479   26504 command_runner.go:130] > GitTreeState:     dirty
	I0907 00:04:54.256489   26504 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0907 00:04:54.256497   26504 command_runner.go:130] > GoVersion:        go1.19.9
	I0907 00:04:54.256504   26504 command_runner.go:130] > Compiler:         gc
	I0907 00:04:54.256511   26504 command_runner.go:130] > Platform:         linux/amd64
	I0907 00:04:54.256524   26504 command_runner.go:130] > Linkmode:         dynamic
	I0907 00:04:54.256539   26504 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0907 00:04:54.256545   26504 command_runner.go:130] > SeccompEnabled:   true
	I0907 00:04:54.256553   26504 command_runner.go:130] > AppArmorEnabled:  false
	I0907 00:04:54.259680   26504 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:04:54.261303   26504 out.go:177]   - env NO_PROXY=192.168.39.212
	I0907 00:04:54.262597   26504 main.go:141] libmachine: (multinode-816061-m02) Calling .GetIP
	I0907 00:04:54.265300   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:54.265686   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:04:54.265719   26504 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:04:54.265926   26504 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0907 00:04:54.270560   26504 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:04:54.283530   26504 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061 for IP: 192.168.39.44
	I0907 00:04:54.283557   26504 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:04:54.283717   26504 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:04:54.283772   26504 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:04:54.283787   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0907 00:04:54.283805   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0907 00:04:54.283829   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0907 00:04:54.283843   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0907 00:04:54.283905   26504 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:04:54.283940   26504 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:04:54.283954   26504 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:04:54.283990   26504 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:04:54.284026   26504 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:04:54.284058   26504 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:04:54.284114   26504 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:04:54.284198   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem -> /usr/share/ca-certificates/13657.pem
	I0907 00:04:54.284256   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> /usr/share/ca-certificates/136572.pem
	I0907 00:04:54.284278   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:04:54.284774   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:04:54.308677   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:04:54.332952   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:04:54.358090   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:04:54.383516   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:04:54.406499   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:04:54.427903   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:04:54.451212   26504 ssh_runner.go:195] Run: openssl version
	I0907 00:04:54.456737   26504 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0907 00:04:54.456819   26504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:04:54.466512   26504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:04:54.471351   26504 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:04:54.471495   26504 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:04:54.471548   26504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:04:54.476859   26504 command_runner.go:130] > 3ec20f2e
	I0907 00:04:54.476933   26504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:04:54.486846   26504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:04:54.497346   26504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:04:54.502235   26504 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:04:54.502537   26504 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:04:54.502599   26504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:04:54.508374   26504 command_runner.go:130] > b5213941
	I0907 00:04:54.508604   26504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:04:54.518713   26504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:04:54.529161   26504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:04:54.533583   26504 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:04:54.533601   26504 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:04:54.533645   26504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:04:54.539222   26504 command_runner.go:130] > 51391683
	I0907 00:04:54.539359   26504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
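The repeated ls / openssl x509 -hash / ln sequence above installs each copied PEM into OpenSSL's hash-lookup directory; a minimal sketch of that pattern, using the file names and hashes the log reports (3ec20f2e, b5213941, 51391683):

    for pem in 136572.pem minikubeCA.pem 13657.pem; do
      hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
      sudo ln -fs "/usr/share/ca-certificates/$pem" "/etc/ssl/certs/$hash.0"
    done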
	I0907 00:04:54.550440   26504 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:04:54.554978   26504 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0907 00:04:54.555013   26504 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0907 00:04:54.555097   26504 ssh_runner.go:195] Run: crio config
	I0907 00:04:54.613922   26504 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0907 00:04:54.613949   26504 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0907 00:04:54.613962   26504 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0907 00:04:54.613966   26504 command_runner.go:130] > #
	I0907 00:04:54.613978   26504 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0907 00:04:54.614005   26504 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0907 00:04:54.614019   26504 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0907 00:04:54.614030   26504 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0907 00:04:54.614037   26504 command_runner.go:130] > # reload'.
	I0907 00:04:54.614051   26504 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0907 00:04:54.614061   26504 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0907 00:04:54.614067   26504 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0907 00:04:54.614072   26504 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0907 00:04:54.614076   26504 command_runner.go:130] > [crio]
	I0907 00:04:54.614082   26504 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0907 00:04:54.614089   26504 command_runner.go:130] > # containers images, in this directory.
	I0907 00:04:54.614100   26504 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0907 00:04:54.614112   26504 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0907 00:04:54.614123   26504 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0907 00:04:54.614135   26504 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0907 00:04:54.614148   26504 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0907 00:04:54.614160   26504 command_runner.go:130] > storage_driver = "overlay"
	I0907 00:04:54.614169   26504 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0907 00:04:54.614188   26504 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0907 00:04:54.614198   26504 command_runner.go:130] > storage_option = [
	I0907 00:04:54.614206   26504 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0907 00:04:54.614217   26504 command_runner.go:130] > ]
	I0907 00:04:54.614228   26504 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0907 00:04:54.614241   26504 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0907 00:04:54.614252   26504 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0907 00:04:54.614264   26504 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0907 00:04:54.614276   26504 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0907 00:04:54.614284   26504 command_runner.go:130] > # always happen on a node reboot
	I0907 00:04:54.614296   26504 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0907 00:04:54.614305   26504 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0907 00:04:54.614316   26504 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0907 00:04:54.614344   26504 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0907 00:04:54.614357   26504 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0907 00:04:54.614373   26504 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0907 00:04:54.614389   26504 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0907 00:04:54.614399   26504 command_runner.go:130] > # internal_wipe = true
	I0907 00:04:54.614410   26504 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0907 00:04:54.614424   26504 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0907 00:04:54.614439   26504 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0907 00:04:54.614452   26504 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0907 00:04:54.614465   26504 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0907 00:04:54.614474   26504 command_runner.go:130] > [crio.api]
	I0907 00:04:54.614483   26504 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0907 00:04:54.614515   26504 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0907 00:04:54.614524   26504 command_runner.go:130] > # IP address on which the stream server will listen.
	I0907 00:04:54.614528   26504 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0907 00:04:54.614537   26504 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0907 00:04:54.614542   26504 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0907 00:04:54.614548   26504 command_runner.go:130] > # stream_port = "0"
	I0907 00:04:54.614554   26504 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0907 00:04:54.614560   26504 command_runner.go:130] > # stream_enable_tls = false
	I0907 00:04:54.614566   26504 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0907 00:04:54.614599   26504 command_runner.go:130] > # stream_idle_timeout = ""
	I0907 00:04:54.614614   26504 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0907 00:04:54.614628   26504 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0907 00:04:54.614637   26504 command_runner.go:130] > # minutes.
	I0907 00:04:54.614644   26504 command_runner.go:130] > # stream_tls_cert = ""
	I0907 00:04:54.614657   26504 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0907 00:04:54.614670   26504 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0907 00:04:54.614677   26504 command_runner.go:130] > # stream_tls_key = ""
	I0907 00:04:54.614683   26504 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0907 00:04:54.614691   26504 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0907 00:04:54.614696   26504 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0907 00:04:54.614703   26504 command_runner.go:130] > # stream_tls_ca = ""
	I0907 00:04:54.614715   26504 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0907 00:04:54.614725   26504 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0907 00:04:54.614740   26504 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0907 00:04:54.614751   26504 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0907 00:04:54.614800   26504 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0907 00:04:54.614814   26504 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0907 00:04:54.614820   26504 command_runner.go:130] > [crio.runtime]
	I0907 00:04:54.614831   26504 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0907 00:04:54.614845   26504 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0907 00:04:54.614860   26504 command_runner.go:130] > # "nofile=1024:2048"
	I0907 00:04:54.614873   26504 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0907 00:04:54.614883   26504 command_runner.go:130] > # default_ulimits = [
	I0907 00:04:54.614887   26504 command_runner.go:130] > # ]
	I0907 00:04:54.614893   26504 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0907 00:04:54.614900   26504 command_runner.go:130] > # no_pivot = false
	I0907 00:04:54.614905   26504 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0907 00:04:54.614918   26504 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0907 00:04:54.614928   26504 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0907 00:04:54.614941   26504 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0907 00:04:54.614953   26504 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0907 00:04:54.614964   26504 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0907 00:04:54.614975   26504 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0907 00:04:54.614982   26504 command_runner.go:130] > # Cgroup setting for conmon
	I0907 00:04:54.615000   26504 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0907 00:04:54.615011   26504 command_runner.go:130] > conmon_cgroup = "pod"
	I0907 00:04:54.615022   26504 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0907 00:04:54.615034   26504 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0907 00:04:54.615048   26504 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0907 00:04:54.615058   26504 command_runner.go:130] > conmon_env = [
	I0907 00:04:54.615068   26504 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0907 00:04:54.615076   26504 command_runner.go:130] > ]
	I0907 00:04:54.615086   26504 command_runner.go:130] > # Additional environment variables to set for all the
	I0907 00:04:54.615096   26504 command_runner.go:130] > # containers. These are overridden if set in the
	I0907 00:04:54.615105   26504 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0907 00:04:54.615114   26504 command_runner.go:130] > # default_env = [
	I0907 00:04:54.615120   26504 command_runner.go:130] > # ]
	I0907 00:04:54.615130   26504 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0907 00:04:54.615139   26504 command_runner.go:130] > # selinux = false
	I0907 00:04:54.615150   26504 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0907 00:04:54.615163   26504 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0907 00:04:54.615174   26504 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0907 00:04:54.615181   26504 command_runner.go:130] > # seccomp_profile = ""
	I0907 00:04:54.615191   26504 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0907 00:04:54.615203   26504 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0907 00:04:54.615218   26504 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0907 00:04:54.615228   26504 command_runner.go:130] > # which might increase security.
	I0907 00:04:54.615238   26504 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0907 00:04:54.615247   26504 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0907 00:04:54.615260   26504 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0907 00:04:54.615273   26504 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0907 00:04:54.615287   26504 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0907 00:04:54.615297   26504 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:04:54.615307   26504 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0907 00:04:54.615320   26504 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0907 00:04:54.615328   26504 command_runner.go:130] > # the cgroup blockio controller.
	I0907 00:04:54.615336   26504 command_runner.go:130] > # blockio_config_file = ""
	I0907 00:04:54.615349   26504 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0907 00:04:54.615358   26504 command_runner.go:130] > # irqbalance daemon.
	I0907 00:04:54.615392   26504 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0907 00:04:54.615401   26504 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0907 00:04:54.615406   26504 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:04:54.615418   26504 command_runner.go:130] > # rdt_config_file = ""
	I0907 00:04:54.615424   26504 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0907 00:04:54.615428   26504 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0907 00:04:54.615434   26504 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0907 00:04:54.615441   26504 command_runner.go:130] > # separate_pull_cgroup = ""
	I0907 00:04:54.615447   26504 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0907 00:04:54.615456   26504 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0907 00:04:54.615460   26504 command_runner.go:130] > # will be added.
	I0907 00:04:54.615464   26504 command_runner.go:130] > # default_capabilities = [
	I0907 00:04:54.615470   26504 command_runner.go:130] > # 	"CHOWN",
	I0907 00:04:54.615474   26504 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0907 00:04:54.615478   26504 command_runner.go:130] > # 	"FSETID",
	I0907 00:04:54.615481   26504 command_runner.go:130] > # 	"FOWNER",
	I0907 00:04:54.615488   26504 command_runner.go:130] > # 	"SETGID",
	I0907 00:04:54.615500   26504 command_runner.go:130] > # 	"SETUID",
	I0907 00:04:54.615504   26504 command_runner.go:130] > # 	"SETPCAP",
	I0907 00:04:54.615508   26504 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0907 00:04:54.615511   26504 command_runner.go:130] > # 	"KILL",
	I0907 00:04:54.615515   26504 command_runner.go:130] > # ]
	I0907 00:04:54.615522   26504 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0907 00:04:54.615530   26504 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0907 00:04:54.615535   26504 command_runner.go:130] > # default_sysctls = [
	I0907 00:04:54.615542   26504 command_runner.go:130] > # ]
	I0907 00:04:54.615549   26504 command_runner.go:130] > # List of devices on the host that a
	I0907 00:04:54.615562   26504 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0907 00:04:54.615568   26504 command_runner.go:130] > # allowed_devices = [
	I0907 00:04:54.615577   26504 command_runner.go:130] > # 	"/dev/fuse",
	I0907 00:04:54.615583   26504 command_runner.go:130] > # ]
	I0907 00:04:54.615595   26504 command_runner.go:130] > # List of additional devices. specified as
	I0907 00:04:54.615610   26504 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0907 00:04:54.615620   26504 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0907 00:04:54.615644   26504 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0907 00:04:54.615655   26504 command_runner.go:130] > # additional_devices = [
	I0907 00:04:54.615683   26504 command_runner.go:130] > # ]
	I0907 00:04:54.615695   26504 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0907 00:04:54.615701   26504 command_runner.go:130] > # cdi_spec_dirs = [
	I0907 00:04:54.615707   26504 command_runner.go:130] > # 	"/etc/cdi",
	I0907 00:04:54.615713   26504 command_runner.go:130] > # 	"/var/run/cdi",
	I0907 00:04:54.615721   26504 command_runner.go:130] > # ]
	I0907 00:04:54.615731   26504 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0907 00:04:54.615749   26504 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0907 00:04:54.615758   26504 command_runner.go:130] > # Defaults to false.
	I0907 00:04:54.615766   26504 command_runner.go:130] > # device_ownership_from_security_context = false
	I0907 00:04:54.615778   26504 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0907 00:04:54.615792   26504 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0907 00:04:54.615798   26504 command_runner.go:130] > # hooks_dir = [
	I0907 00:04:54.615811   26504 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0907 00:04:54.615816   26504 command_runner.go:130] > # ]
	I0907 00:04:54.615825   26504 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0907 00:04:54.615832   26504 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0907 00:04:54.615840   26504 command_runner.go:130] > # its default mounts from the following two files:
	I0907 00:04:54.615844   26504 command_runner.go:130] > #
	I0907 00:04:54.615850   26504 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0907 00:04:54.615857   26504 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0907 00:04:54.615862   26504 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0907 00:04:54.615869   26504 command_runner.go:130] > #
	I0907 00:04:54.615879   26504 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0907 00:04:54.615892   26504 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0907 00:04:54.615905   26504 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0907 00:04:54.615914   26504 command_runner.go:130] > #      only add mounts it finds in this file.
	I0907 00:04:54.615922   26504 command_runner.go:130] > #
	I0907 00:04:54.615930   26504 command_runner.go:130] > # default_mounts_file = ""
	I0907 00:04:54.615942   26504 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0907 00:04:54.615957   26504 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0907 00:04:54.615963   26504 command_runner.go:130] > pids_limit = 1024
	I0907 00:04:54.615971   26504 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0907 00:04:54.615982   26504 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0907 00:04:54.615988   26504 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0907 00:04:54.615999   26504 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0907 00:04:54.616008   26504 command_runner.go:130] > # log_size_max = -1
	I0907 00:04:54.616023   26504 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0907 00:04:54.616032   26504 command_runner.go:130] > # log_to_journald = false
	I0907 00:04:54.616043   26504 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0907 00:04:54.616072   26504 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0907 00:04:54.616080   26504 command_runner.go:130] > # Path to directory for container attach sockets.
	I0907 00:04:54.616085   26504 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0907 00:04:54.616093   26504 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0907 00:04:54.616097   26504 command_runner.go:130] > # bind_mount_prefix = ""
	I0907 00:04:54.616106   26504 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0907 00:04:54.616115   26504 command_runner.go:130] > # read_only = false
	I0907 00:04:54.616126   26504 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0907 00:04:54.616140   26504 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0907 00:04:54.616150   26504 command_runner.go:130] > # live configuration reload.
	I0907 00:04:54.616159   26504 command_runner.go:130] > # log_level = "info"
	I0907 00:04:54.616177   26504 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0907 00:04:54.616189   26504 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:04:54.616198   26504 command_runner.go:130] > # log_filter = ""
	I0907 00:04:54.616209   26504 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0907 00:04:54.616222   26504 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0907 00:04:54.616232   26504 command_runner.go:130] > # separated by comma.
	I0907 00:04:54.616242   26504 command_runner.go:130] > # uid_mappings = ""
	I0907 00:04:54.616255   26504 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0907 00:04:54.616269   26504 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0907 00:04:54.616279   26504 command_runner.go:130] > # separated by comma.
	I0907 00:04:54.616286   26504 command_runner.go:130] > # gid_mappings = ""
	I0907 00:04:54.616300   26504 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0907 00:04:54.616315   26504 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0907 00:04:54.616328   26504 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0907 00:04:54.616338   26504 command_runner.go:130] > # minimum_mappable_uid = -1
	I0907 00:04:54.616349   26504 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0907 00:04:54.616362   26504 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0907 00:04:54.616376   26504 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0907 00:04:54.616403   26504 command_runner.go:130] > # minimum_mappable_gid = -1
	I0907 00:04:54.616412   26504 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0907 00:04:54.616421   26504 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0907 00:04:54.616434   26504 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0907 00:04:54.616441   26504 command_runner.go:130] > # ctr_stop_timeout = 30
	I0907 00:04:54.616454   26504 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0907 00:04:54.616466   26504 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0907 00:04:54.616476   26504 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0907 00:04:54.616488   26504 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0907 00:04:54.616499   26504 command_runner.go:130] > drop_infra_ctr = false
	I0907 00:04:54.616506   26504 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0907 00:04:54.616515   26504 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0907 00:04:54.616522   26504 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0907 00:04:54.616532   26504 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0907 00:04:54.616547   26504 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0907 00:04:54.616559   26504 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0907 00:04:54.616570   26504 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0907 00:04:54.616582   26504 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0907 00:04:54.616592   26504 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0907 00:04:54.616606   26504 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0907 00:04:54.616620   26504 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0907 00:04:54.616634   26504 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0907 00:04:54.616644   26504 command_runner.go:130] > # default_runtime = "runc"
	I0907 00:04:54.616656   26504 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0907 00:04:54.616671   26504 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0907 00:04:54.616685   26504 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0907 00:04:54.616697   26504 command_runner.go:130] > # creation as a file is not desired either.
	I0907 00:04:54.616715   26504 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0907 00:04:54.616727   26504 command_runner.go:130] > # the hostname is being managed dynamically.
	I0907 00:04:54.616736   26504 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0907 00:04:54.616763   26504 command_runner.go:130] > # ]
	I0907 00:04:54.616776   26504 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0907 00:04:54.616789   26504 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0907 00:04:54.616803   26504 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0907 00:04:54.616816   26504 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0907 00:04:54.616824   26504 command_runner.go:130] > #
	I0907 00:04:54.616832   26504 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0907 00:04:54.616843   26504 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0907 00:04:54.616850   26504 command_runner.go:130] > #  runtime_type = "oci"
	I0907 00:04:54.616861   26504 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0907 00:04:54.616872   26504 command_runner.go:130] > #  privileged_without_host_devices = false
	I0907 00:04:54.616879   26504 command_runner.go:130] > #  allowed_annotations = []
	I0907 00:04:54.616885   26504 command_runner.go:130] > # Where:
	I0907 00:04:54.616894   26504 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0907 00:04:54.616905   26504 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0907 00:04:54.616918   26504 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0907 00:04:54.616932   26504 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0907 00:04:54.616941   26504 command_runner.go:130] > #   in $PATH.
	I0907 00:04:54.616950   26504 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0907 00:04:54.616962   26504 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0907 00:04:54.616973   26504 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0907 00:04:54.616982   26504 command_runner.go:130] > #   state.
	I0907 00:04:54.616993   26504 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0907 00:04:54.617007   26504 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0907 00:04:54.617018   26504 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0907 00:04:54.617030   26504 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0907 00:04:54.617044   26504 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0907 00:04:54.617057   26504 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0907 00:04:54.617066   26504 command_runner.go:130] > #   The currently recognized values are:
	I0907 00:04:54.617076   26504 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0907 00:04:54.617092   26504 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0907 00:04:54.617107   26504 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0907 00:04:54.617120   26504 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0907 00:04:54.617136   26504 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0907 00:04:54.617148   26504 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0907 00:04:54.617157   26504 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0907 00:04:54.617169   26504 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0907 00:04:54.617181   26504 command_runner.go:130] > #   should be moved to the container's cgroup
	I0907 00:04:54.617189   26504 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0907 00:04:54.617199   26504 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0907 00:04:54.617206   26504 command_runner.go:130] > runtime_type = "oci"
	I0907 00:04:54.617216   26504 command_runner.go:130] > runtime_root = "/run/runc"
	I0907 00:04:54.617224   26504 command_runner.go:130] > runtime_config_path = ""
	I0907 00:04:54.617234   26504 command_runner.go:130] > monitor_path = ""
	I0907 00:04:54.617241   26504 command_runner.go:130] > monitor_cgroup = ""
	I0907 00:04:54.617250   26504 command_runner.go:130] > monitor_exec_cgroup = ""
	I0907 00:04:54.617261   26504 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0907 00:04:54.617271   26504 command_runner.go:130] > # running containers
	I0907 00:04:54.617279   26504 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0907 00:04:54.617295   26504 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0907 00:04:54.617325   26504 command_runner.go:130] > # VMs. Kata provides additional isolation from the host, minimizing the host attack
	I0907 00:04:54.617338   26504 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0907 00:04:54.617346   26504 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0907 00:04:54.617351   26504 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0907 00:04:54.617361   26504 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0907 00:04:54.617371   26504 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0907 00:04:54.617379   26504 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0907 00:04:54.617389   26504 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
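A minimal sketch of how an additional handler could be registered alongside the runc entry above, assuming crun were installed at /usr/bin/crun on the node (a hypothetical path; this run only configures runc). CRI-O reads drop-ins from /etc/crio/crio.conf.d and picks them up on a service restart:

  # hypothetical drop-in; not part of this test run
  printf '%s\n' \
    '[crio.runtime.runtimes.crun]' \
    'runtime_path = "/usr/bin/crun"' \
    'runtime_type = "oci"' \
    'runtime_root = "/run/crun"' \
    | sudo tee /etc/crio/crio.conf.d/10-crun.conf
  sudo systemctl restart crio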
	I0907 00:04:54.617398   26504 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0907 00:04:54.617410   26504 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0907 00:04:54.617424   26504 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0907 00:04:54.617440   26504 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0907 00:04:54.617456   26504 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0907 00:04:54.617468   26504 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0907 00:04:54.617482   26504 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0907 00:04:54.617499   26504 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0907 00:04:54.617507   26504 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0907 00:04:54.617515   26504 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0907 00:04:54.617521   26504 command_runner.go:130] > # Example:
	I0907 00:04:54.617602   26504 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0907 00:04:54.617617   26504 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0907 00:04:54.617626   26504 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0907 00:04:54.617635   26504 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0907 00:04:54.617642   26504 command_runner.go:130] > # cpuset = 0
	I0907 00:04:54.617649   26504 command_runner.go:130] > # cpushares = "0-1"
	I0907 00:04:54.617655   26504 command_runner.go:130] > # Where:
	I0907 00:04:54.617667   26504 command_runner.go:130] > # The workload name is workload-type.
	I0907 00:04:54.617680   26504 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0907 00:04:54.617692   26504 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0907 00:04:54.617704   26504 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0907 00:04:54.617721   26504 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0907 00:04:54.617734   26504 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0907 00:04:54.617740   26504 command_runner.go:130] > # 
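A sketch of the pod side of the workload mechanism described above, assuming the commented [crio.runtime.workloads.workload-type] example were actually enabled on the node (it is not in this run); the annotation keys come from that example, and the pod name and cpushares value are hypothetical:

  kubectl run workload-demo --image=busybox --restart=Never \
    --annotations='io.crio/workload=' \
    --annotations='io.crio.workload-type/workload-demo={"cpushares": "512"}' \
    -- sleep 3600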
	I0907 00:04:54.617754   26504 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0907 00:04:54.617762   26504 command_runner.go:130] > #
	I0907 00:04:54.617772   26504 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0907 00:04:54.617782   26504 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0907 00:04:54.617792   26504 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0907 00:04:54.617807   26504 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0907 00:04:54.617820   26504 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0907 00:04:54.617829   26504 command_runner.go:130] > [crio.image]
	I0907 00:04:54.617840   26504 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0907 00:04:54.617850   26504 command_runner.go:130] > # default_transport = "docker://"
	I0907 00:04:54.617863   26504 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0907 00:04:54.617872   26504 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0907 00:04:54.617879   26504 command_runner.go:130] > # global_auth_file = ""
	I0907 00:04:54.617893   26504 command_runner.go:130] > # The image used to instantiate infra containers.
	I0907 00:04:54.617905   26504 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:04:54.617916   26504 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0907 00:04:54.617928   26504 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0907 00:04:54.617940   26504 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0907 00:04:54.617948   26504 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:04:54.617958   26504 command_runner.go:130] > # pause_image_auth_file = ""
	I0907 00:04:54.617970   26504 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0907 00:04:54.617984   26504 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0907 00:04:54.617997   26504 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0907 00:04:54.618009   26504 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0907 00:04:54.618017   26504 command_runner.go:130] > # pause_command = "/pause"
	I0907 00:04:54.618026   26504 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0907 00:04:54.618045   26504 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0907 00:04:54.618055   26504 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0907 00:04:54.618068   26504 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0907 00:04:54.618086   26504 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0907 00:04:54.618095   26504 command_runner.go:130] > # signature_policy = ""
	I0907 00:04:54.618107   26504 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0907 00:04:54.618118   26504 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0907 00:04:54.618128   26504 command_runner.go:130] > # changing them here.
	I0907 00:04:54.618138   26504 command_runner.go:130] > # insecure_registries = [
	I0907 00:04:54.618143   26504 command_runner.go:130] > # ]
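A minimal sketch of the registries.conf route recommended above, assuming a hypothetical private registry at registry.local:5000; this run leaves both insecure_registries and /etc/containers/registries.conf untouched:

  # containers-registries.conf(5) v2 syntax; appended for illustration only
  printf '%s\n' \
    '[[registry]]' \
    'location = "registry.local:5000"' \
    'insecure = true' \
    | sudo tee -a /etc/containers/registries.conf
  sudo systemctl restart crio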
	I0907 00:04:54.618151   26504 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0907 00:04:54.618161   26504 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0907 00:04:54.618170   26504 command_runner.go:130] > # image_volumes = "mkdir"
	I0907 00:04:54.618181   26504 command_runner.go:130] > # Temporary directory to use for storing big files
	I0907 00:04:54.618192   26504 command_runner.go:130] > # big_files_temporary_dir = ""
	I0907 00:04:54.618205   26504 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0907 00:04:54.618215   26504 command_runner.go:130] > # CNI plugins.
	I0907 00:04:54.618224   26504 command_runner.go:130] > [crio.network]
	I0907 00:04:54.618237   26504 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0907 00:04:54.618249   26504 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0907 00:04:54.618259   26504 command_runner.go:130] > # cni_default_network = ""
	I0907 00:04:54.618272   26504 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0907 00:04:54.618281   26504 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0907 00:04:54.618289   26504 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0907 00:04:54.618295   26504 command_runner.go:130] > # plugin_dirs = [
	I0907 00:04:54.618299   26504 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0907 00:04:54.618305   26504 command_runner.go:130] > # ]
	I0907 00:04:54.618311   26504 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0907 00:04:54.618317   26504 command_runner.go:130] > [crio.metrics]
	I0907 00:04:54.618322   26504 command_runner.go:130] > # Globally enable or disable metrics support.
	I0907 00:04:54.618330   26504 command_runner.go:130] > enable_metrics = true
	I0907 00:04:54.618335   26504 command_runner.go:130] > # Specify enabled metrics collectors.
	I0907 00:04:54.618342   26504 command_runner.go:130] > # Per default all metrics are enabled.
	I0907 00:04:54.618349   26504 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0907 00:04:54.618357   26504 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0907 00:04:54.618365   26504 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0907 00:04:54.618371   26504 command_runner.go:130] > # metrics_collectors = [
	I0907 00:04:54.618376   26504 command_runner.go:130] > # 	"operations",
	I0907 00:04:54.618383   26504 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0907 00:04:54.618387   26504 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0907 00:04:54.618393   26504 command_runner.go:130] > # 	"operations_errors",
	I0907 00:04:54.618398   26504 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0907 00:04:54.618404   26504 command_runner.go:130] > # 	"image_pulls_by_name",
	I0907 00:04:54.618432   26504 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0907 00:04:54.618439   26504 command_runner.go:130] > # 	"image_pulls_failures",
	I0907 00:04:54.618443   26504 command_runner.go:130] > # 	"image_pulls_successes",
	I0907 00:04:54.618448   26504 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0907 00:04:54.618452   26504 command_runner.go:130] > # 	"image_layer_reuse",
	I0907 00:04:54.618456   26504 command_runner.go:130] > # 	"containers_oom_total",
	I0907 00:04:54.618460   26504 command_runner.go:130] > # 	"containers_oom",
	I0907 00:04:54.618463   26504 command_runner.go:130] > # 	"processes_defunct",
	I0907 00:04:54.618467   26504 command_runner.go:130] > # 	"operations_total",
	I0907 00:04:54.618471   26504 command_runner.go:130] > # 	"operations_latency_seconds",
	I0907 00:04:54.618475   26504 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0907 00:04:54.618479   26504 command_runner.go:130] > # 	"operations_errors_total",
	I0907 00:04:54.618487   26504 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0907 00:04:54.618496   26504 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0907 00:04:54.618502   26504 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0907 00:04:54.618507   26504 command_runner.go:130] > # 	"image_pulls_success_total",
	I0907 00:04:54.618511   26504 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0907 00:04:54.618516   26504 command_runner.go:130] > # 	"containers_oom_count_total",
	I0907 00:04:54.618522   26504 command_runner.go:130] > # ]
	I0907 00:04:54.618528   26504 command_runner.go:130] > # The port on which the metrics server will listen.
	I0907 00:04:54.618534   26504 command_runner.go:130] > # metrics_port = 9090
	I0907 00:04:54.618540   26504 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0907 00:04:54.618554   26504 command_runner.go:130] > # metrics_socket = ""
	I0907 00:04:54.618559   26504 command_runner.go:130] > # The certificate for the secure metrics server.
	I0907 00:04:54.618565   26504 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0907 00:04:54.618571   26504 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0907 00:04:54.618575   26504 command_runner.go:130] > # certificate on any modification event.
	I0907 00:04:54.618579   26504 command_runner.go:130] > # metrics_cert = ""
	I0907 00:04:54.618584   26504 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0907 00:04:54.618589   26504 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0907 00:04:54.618593   26504 command_runner.go:130] > # metrics_key = ""
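Since enable_metrics is set to true above, the exporter can be sanity-checked from a shell on the node (e.g. via minikube ssh); this sketch assumes the commented default metrics_port of 9090:

  curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_operations' | head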
	I0907 00:04:54.618598   26504 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0907 00:04:54.618605   26504 command_runner.go:130] > [crio.tracing]
	I0907 00:04:54.618610   26504 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0907 00:04:54.618615   26504 command_runner.go:130] > # enable_tracing = false
	I0907 00:04:54.618620   26504 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0907 00:04:54.618624   26504 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0907 00:04:54.618632   26504 command_runner.go:130] > # Number of samples to collect per million spans.
	I0907 00:04:54.618636   26504 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0907 00:04:54.618642   26504 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0907 00:04:54.618646   26504 command_runner.go:130] > [crio.stats]
	I0907 00:04:54.618652   26504 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0907 00:04:54.618660   26504 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0907 00:04:54.618665   26504 command_runner.go:130] > # stats_collection_period = 0
	I0907 00:04:54.619618   26504 command_runner.go:130] ! time="2023-09-07 00:04:54.599741442Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0907 00:04:54.619644   26504 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
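With the config rendered and CRI-O started (version 1.24.1 per the banner above), the running runtime can be cross-checked from the node shell; the endpoint below is the crio.sock path used throughout this log:

  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info | head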
	I0907 00:04:54.619739   26504 cni.go:84] Creating CNI manager for ""
	I0907 00:04:54.619757   26504 cni.go:136] 2 nodes found, recommending kindnet
	I0907 00:04:54.619767   26504 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:04:54.619785   26504 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.44 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-816061 NodeName:multinode-816061-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:04:54.619889   26504 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-816061-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:04:54.619932   26504 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-816061-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:04:54.619984   26504 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:04:54.629621   26504 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.1': No such file or directory
	I0907 00:04:54.629666   26504 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.1': No such file or directory
	
	Initiating transfer...
	I0907 00:04:54.629720   26504 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.1
	I0907 00:04:54.639139   26504 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl.sha256
	I0907 00:04:54.639165   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/linux/amd64/v1.28.1/kubectl -> /var/lib/minikube/binaries/v1.28.1/kubectl
	I0907 00:04:54.639216   26504 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/linux/amd64/v1.28.1/kubeadm
	I0907 00:04:54.639234   26504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubectl
	I0907 00:04:54.639256   26504 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/linux/amd64/v1.28.1/kubelet
	I0907 00:04:54.647060   26504 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubectl': No such file or directory
	I0907 00:04:54.647097   26504 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubectl': No such file or directory
	I0907 00:04:54.647120   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/linux/amd64/v1.28.1/kubectl --> /var/lib/minikube/binaries/v1.28.1/kubectl (49864704 bytes)
	I0907 00:04:59.954975   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/linux/amd64/v1.28.1/kubeadm -> /var/lib/minikube/binaries/v1.28.1/kubeadm
	I0907 00:04:59.955053   26504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubeadm
	I0907 00:04:59.960597   26504 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubeadm': No such file or directory
	I0907 00:04:59.960643   26504 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubeadm': No such file or directory
	I0907 00:04:59.960670   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/linux/amd64/v1.28.1/kubeadm --> /var/lib/minikube/binaries/v1.28.1/kubeadm (50749440 bytes)
	I0907 00:05:06.612406   26504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:05:06.625608   26504 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/linux/amd64/v1.28.1/kubelet -> /var/lib/minikube/binaries/v1.28.1/kubelet
	I0907 00:05:06.625705   26504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubelet
	I0907 00:05:06.629773   26504 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubelet': No such file or directory
	I0907 00:05:06.629877   26504 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubelet': No such file or directory
	I0907 00:05:06.629906   26504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/linux/amd64/v1.28.1/kubelet --> /var/lib/minikube/binaries/v1.28.1/kubelet (110764032 bytes)
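The downloads above pin each binary to its published .sha256 file (the checksum=file: fragment in the URLs); the same verification can be reproduced by hand from any scratch directory:

  curl -LO https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet
  curl -LO https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet.sha256
  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check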
	I0907 00:05:07.166034   26504 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0907 00:05:07.177512   26504 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0907 00:05:07.195106   26504 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:05:07.212586   26504 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0907 00:05:07.216511   26504 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:05:07.229375   26504 host.go:66] Checking if "multinode-816061" exists ...
	I0907 00:05:07.229658   26504 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:05:07.229685   26504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:05:07.229715   26504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:05:07.244752   26504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44921
	I0907 00:05:07.245169   26504 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:05:07.245663   26504 main.go:141] libmachine: Using API Version  1
	I0907 00:05:07.245686   26504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:05:07.246048   26504 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:05:07.246233   26504 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:05:07.246387   26504 start.go:301] JoinCluster: &{Name:multinode-816061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:05:07.246478   26504 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0907 00:05:07.246499   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:05:07.248997   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:05:07.249415   26504 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:05:07.249439   26504 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:05:07.249612   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:05:07.249780   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:05:07.249946   26504 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:05:07.250090   26504 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:05:07.427538   26504 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zjfcsl.wip3e3hjykfpjcuo --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:05:07.427648   26504 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0907 00:05:07.427689   26504 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zjfcsl.wip3e3hjykfpjcuo --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-816061-m02"
	I0907 00:05:07.472824   26504 command_runner.go:130] > [preflight] Running pre-flight checks
	I0907 00:05:07.620736   26504 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0907 00:05:07.620771   26504 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0907 00:05:07.663748   26504 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:05:07.663780   26504 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:05:07.663789   26504 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0907 00:05:07.786497   26504 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0907 00:05:09.805942   26504 command_runner.go:130] > This node has joined the cluster:
	I0907 00:05:09.805966   26504 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0907 00:05:09.805972   26504 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0907 00:05:09.805979   26504 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0907 00:05:09.808298   26504 command_runner.go:130] ! W0907 00:05:07.467771     824 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0907 00:05:09.808324   26504 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:05:09.808347   26504 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zjfcsl.wip3e3hjykfpjcuo --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-816061-m02": (2.380640053s)
	I0907 00:05:09.808369   26504 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0907 00:05:10.068555   26504 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0907 00:05:10.068604   26504 start.go:303] JoinCluster complete in 2.822219921s
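The --discovery-token-ca-cert-hash used in the join command above can be recomputed on the control-plane node to confirm it matches; this sketch assumes the cluster CA at /var/lib/minikube/certs/ca.crt (the certificatesDir from the kubeadm config earlier) and an RSA CA key, which is kubeadm's default:

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'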
	I0907 00:05:10.068615   26504 cni.go:84] Creating CNI manager for ""
	I0907 00:05:10.068619   26504 cni.go:136] 2 nodes found, recommending kindnet
	I0907 00:05:10.068663   26504 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0907 00:05:10.074325   26504 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0907 00:05:10.074346   26504 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0907 00:05:10.074355   26504 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0907 00:05:10.074365   26504 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0907 00:05:10.074375   26504 command_runner.go:130] > Access: 2023-09-07 00:03:32.020590168 +0000
	I0907 00:05:10.074383   26504 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0907 00:05:10.074391   26504 command_runner.go:130] > Change: 2023-09-07 00:03:30.122590168 +0000
	I0907 00:05:10.074401   26504 command_runner.go:130] >  Birth: -
	I0907 00:05:10.074628   26504 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0907 00:05:10.074650   26504 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0907 00:05:10.094302   26504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0907 00:05:10.429415   26504 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0907 00:05:10.437145   26504 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0907 00:05:10.442016   26504 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0907 00:05:10.460611   26504 command_runner.go:130] > daemonset.apps/kindnet configured
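After the CNI manifest is applied, the kindnet rollout can be confirmed with kubectl; a sketch, assuming the kubeconfig context carries the minikube profile name as usual:

  kubectl --context multinode-816061 get daemonset --all-namespaces | grep kindnet
  kubectl --context multinode-816061 get pods --all-namespaces -o wide | grep kindnet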
	I0907 00:05:10.463272   26504 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:05:10.463498   26504 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:05:10.463812   26504 round_trippers.go:463] GET https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0907 00:05:10.463826   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:10.463837   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:10.463847   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:10.466180   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:05:10.466194   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:10.466200   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:10.466206   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:10.466211   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:10.466218   26504 round_trippers.go:580]     Content-Length: 291
	I0907 00:05:10.466226   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:10 GMT
	I0907 00:05:10.466235   26504 round_trippers.go:580]     Audit-Id: 8fe3218d-c4cd-45fc-b788-74c041c9e3e6
	I0907 00:05:10.466252   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:10.466271   26504 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"583de68c-e976-43b9-bd36-bcf190acd905","resourceVersion":"450","creationTimestamp":"2023-09-07T00:04:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0907 00:05:10.466358   26504 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-816061" context rescaled to 1 replicas
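The rescale above goes through the Deployment's Scale subresource; the equivalent kubectl invocation, for reproducing it by hand against the same cluster, would be:

  kubectl --context multinode-816061 -n kube-system scale deployment coredns --replicas=1
  kubectl --context multinode-816061 -n kube-system get deployment coredns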
	I0907 00:05:10.466386   26504 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0907 00:05:10.468458   26504 out.go:177] * Verifying Kubernetes components...
	I0907 00:05:10.470141   26504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:05:10.484380   26504 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:05:10.484638   26504 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:05:10.484930   26504 node_ready.go:35] waiting up to 6m0s for node "multinode-816061-m02" to be "Ready" ...
	I0907 00:05:10.485008   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:10.485019   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:10.485031   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:10.485042   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:10.487768   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:05:10.487790   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:10.487801   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:10 GMT
	I0907 00:05:10.487808   26504 round_trippers.go:580]     Audit-Id: 65c5b9e3-7f1e-417b-a8b0-23a01842477a
	I0907 00:05:10.487814   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:10.487820   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:10.487825   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:10.487831   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:10.487838   26504 round_trippers.go:580]     Content-Length: 3530
	I0907 00:05:10.487909   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"506","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0907 00:05:10.488149   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:10.488159   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:10.488166   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:10.488172   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:10.490453   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:05:10.490470   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:10.490479   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:10 GMT
	I0907 00:05:10.490488   26504 round_trippers.go:580]     Audit-Id: 1cac8052-50a2-4505-b7f7-6d035cec3097
	I0907 00:05:10.490499   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:10.490511   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:10.490524   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:10.490542   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:10.490557   26504 round_trippers.go:580]     Content-Length: 3530
	I0907 00:05:10.490633   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"506","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0907 00:05:10.991223   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:10.991248   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:10.991260   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:10.991271   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:10.994075   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:05:10.994102   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:10.994113   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:10.994126   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:10.994135   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:10.994143   26504 round_trippers.go:580]     Content-Length: 3530
	I0907 00:05:10.994154   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:10 GMT
	I0907 00:05:10.994160   26504 round_trippers.go:580]     Audit-Id: 9b8d5316-61c5-4c09-8255-963f5a5ee995
	I0907 00:05:10.994168   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:10.994246   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"506","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0907 00:05:11.491809   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:11.491830   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:11.491839   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:11.491845   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:11.494980   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:11.495007   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:11.495017   26504 round_trippers.go:580]     Audit-Id: 9947b899-230e-451a-8b02-4d05cda26eb6
	I0907 00:05:11.495025   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:11.495034   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:11.495046   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:11.495055   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:11.495065   26504 round_trippers.go:580]     Content-Length: 3530
	I0907 00:05:11.495079   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:11 GMT
	I0907 00:05:11.495125   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"506","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0907 00:05:11.991779   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:11.991802   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:11.991811   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:11.991817   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:11.995795   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:11.995820   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:11.995828   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:11.995834   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:11 GMT
	I0907 00:05:11.995841   26504 round_trippers.go:580]     Audit-Id: 66280710-c048-4b12-9d89-40c250cc265c
	I0907 00:05:11.995850   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:11.995867   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:11.995879   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:11.995892   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:11.995979   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:12.491471   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:12.491499   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:12.491512   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:12.491521   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:12.495277   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:12.495300   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:12.495311   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:12.495320   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:12.495330   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:12.495342   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:12.495349   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:12.495358   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:12 GMT
	I0907 00:05:12.495369   26504 round_trippers.go:580]     Audit-Id: 5a8a1450-c45f-4f79-a7f1-1ba346a86a7f
	I0907 00:05:12.495455   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:12.495707   26504 node_ready.go:58] node "multinode-816061-m02" has status "Ready":"False"
	I0907 00:05:12.992078   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:12.992102   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:12.992110   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:12.992116   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:12.996569   26504 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:05:12.996596   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:12.996607   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:12.996615   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:12.996622   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:12.996631   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:12.996640   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:12 GMT
	I0907 00:05:12.996649   26504 round_trippers.go:580]     Audit-Id: 2f061356-551f-4539-aba7-35305c44db28
	I0907 00:05:12.996661   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:12.996831   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:13.492006   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:13.492030   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:13.492044   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:13.492052   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:13.495613   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:13.495637   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:13.495647   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:13.495656   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:13.495666   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:13.495675   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:13.495690   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:13.495697   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:13 GMT
	I0907 00:05:13.495706   26504 round_trippers.go:580]     Audit-Id: b85b7b09-8a54-4387-bb67-43c89092654e
	I0907 00:05:13.495798   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:13.992012   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:13.992032   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:13.992040   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:13.992046   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:13.995734   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:13.995755   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:13.995762   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:13.995768   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:13.995773   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:13 GMT
	I0907 00:05:13.995779   26504 round_trippers.go:580]     Audit-Id: e74916ea-885a-41db-b91c-30baa1d8a281
	I0907 00:05:13.995784   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:13.995789   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:13.995795   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:13.995937   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:14.491678   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:14.491702   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:14.491710   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:14.491716   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:14.496855   26504 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0907 00:05:14.496878   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:14.496885   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:14.496890   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:14.496896   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:14.496901   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:14.496907   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:14.496912   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:14 GMT
	I0907 00:05:14.496917   26504 round_trippers.go:580]     Audit-Id: 50dffe72-c09e-4e39-a431-2f2ed4537a87
	I0907 00:05:14.497143   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:14.497360   26504 node_ready.go:58] node "multinode-816061-m02" has status "Ready":"False"
	I0907 00:05:14.991863   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:14.991898   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:14.991906   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:14.991912   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:14.994993   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:14.995018   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:14.995027   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:14.995035   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:14.995042   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:14.995050   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:14.995058   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:14.995065   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:14 GMT
	I0907 00:05:14.995072   26504 round_trippers.go:580]     Audit-Id: 8f0344dd-14b1-4400-9238-c998b4b13bd0
	I0907 00:05:14.995170   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:15.491851   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:15.491873   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:15.491881   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:15.491888   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:15.494449   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:05:15.494471   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:15.494479   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:15.494485   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:15.494491   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:15 GMT
	I0907 00:05:15.494497   26504 round_trippers.go:580]     Audit-Id: 9c19d4af-4ea0-443b-b346-b3bddbc8bfc3
	I0907 00:05:15.494502   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:15.494508   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:15.494513   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:15.494583   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:15.991113   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:15.991135   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:15.991147   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:15.991153   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:15.993517   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:05:15.993539   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:15.993550   26504 round_trippers.go:580]     Audit-Id: a8b7781b-dcf1-4ea5-9358-6c8c08bd6216
	I0907 00:05:15.993560   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:15.993566   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:15.993571   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:15.993580   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:15.993585   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:15.993592   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:15 GMT
	I0907 00:05:15.993670   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:16.492007   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:16.492033   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:16.492042   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:16.492049   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:16.494677   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:05:16.494700   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:16.494711   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:16.494721   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:16.494734   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:16.494746   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:16.494755   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:16.494765   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:16 GMT
	I0907 00:05:16.494773   26504 round_trippers.go:580]     Audit-Id: fbd8825b-0537-47fa-a1d4-00d7d22d7193
	I0907 00:05:16.494862   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:16.992019   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:16.992040   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:16.992048   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:16.992057   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:16.995001   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:05:16.995027   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:16.995037   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:16.995045   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:16.995054   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:16.995062   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:16.995071   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:16 GMT
	I0907 00:05:16.995080   26504 round_trippers.go:580]     Audit-Id: 198b2e0e-9098-4864-b7b3-8ab0e44c3c06
	I0907 00:05:16.995091   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:16.995174   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:16.995437   26504 node_ready.go:58] node "multinode-816061-m02" has status "Ready":"False"
	I0907 00:05:17.491403   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:17.491429   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:17.491441   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:17.491450   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:17.495477   26504 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:05:17.495502   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:17.495513   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:17.495522   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:17.495530   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:17.495539   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:17.495546   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:17.495554   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:17 GMT
	I0907 00:05:17.495564   26504 round_trippers.go:580]     Audit-Id: aa9dc3a5-08ff-43a8-963b-8fc3cdc6082f
	I0907 00:05:17.495776   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:17.991396   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:17.991429   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:17.991438   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:17.991445   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:17.994900   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:17.994930   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:17.994941   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:17.994949   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:17 GMT
	I0907 00:05:17.994957   26504 round_trippers.go:580]     Audit-Id: 4a4040f0-0356-436d-ae73-aa26ac261d74
	I0907 00:05:17.994965   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:17.994979   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:17.994991   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:17.995004   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:17.995104   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:18.491627   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:18.491647   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:18.491659   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:18.491670   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:18.495139   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:18.495159   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:18.495168   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:18.495174   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:18.495183   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:18.495189   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:18.495194   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:18 GMT
	I0907 00:05:18.495203   26504 round_trippers.go:580]     Audit-Id: 098279b2-87ea-4aef-a209-974535a900c3
	I0907 00:05:18.495209   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:18.495268   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:18.991425   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:18.991470   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:18.991478   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:18.991484   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:18.995141   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:18.995159   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:18.995172   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:18.995180   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:18.995191   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:18.995199   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:18.995211   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:18.995219   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:18 GMT
	I0907 00:05:18.995230   26504 round_trippers.go:580]     Audit-Id: 4aa437df-6861-4acb-a5f6-3c7299f7874a
	I0907 00:05:18.995369   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:18.995717   26504 node_ready.go:58] node "multinode-816061-m02" has status "Ready":"False"
	I0907 00:05:19.492038   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:19.492066   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:19.492078   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:19.492089   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:19.495701   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:19.495722   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:19.495731   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:19.495737   26504 round_trippers.go:580]     Content-Length: 3639
	I0907 00:05:19.495747   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:19 GMT
	I0907 00:05:19.495753   26504 round_trippers.go:580]     Audit-Id: 45267c66-ae1a-42ba-9e9b-5af58bd426e0
	I0907 00:05:19.495762   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:19.495772   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:19.495781   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:19.495851   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"513","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0907 00:05:19.991405   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:19.991426   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:19.991438   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:19.991446   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:19.996938   26504 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0907 00:05:19.996960   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:19.996967   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:19.996974   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:19.996979   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:19.996984   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:19.996990   26504 round_trippers.go:580]     Content-Length: 3908
	I0907 00:05:19.996996   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:19 GMT
	I0907 00:05:19.997008   26504 round_trippers.go:580]     Audit-Id: 2b9e25f7-6ef4-46c7-add6-399550874254
	I0907 00:05:19.997079   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"533","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2884 chars]
	I0907 00:05:20.491636   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:20.491659   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:20.491670   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:20.491679   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:20.494924   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:20.494947   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:20.494959   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:20.494968   26504 round_trippers.go:580]     Content-Length: 3725
	I0907 00:05:20.494973   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:20 GMT
	I0907 00:05:20.494979   26504 round_trippers.go:580]     Audit-Id: 4d0db1dc-7091-4545-b4b9-ea91ecdd6e6e
	I0907 00:05:20.494985   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:20.494994   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:20.495000   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:20.495036   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"537","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I0907 00:05:20.495250   26504 node_ready.go:49] node "multinode-816061-m02" has status "Ready":"True"
	I0907 00:05:20.495261   26504 node_ready.go:38] duration metric: took 10.01031653s waiting for node "multinode-816061-m02" to be "Ready" ...
	I0907 00:05:20.495268   26504 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:05:20.495316   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:05:20.495322   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:20.495329   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:20.495336   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:20.499649   26504 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:05:20.499666   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:20.499672   26504 round_trippers.go:580]     Audit-Id: 96ac90d8-9bf9-4453-ab87-22b079a1cff4
	I0907 00:05:20.499681   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:20.499689   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:20.499697   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:20.499706   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:20.499714   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:20 GMT
	I0907 00:05:20.501363   26504 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"537"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"446","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67366 chars]
	I0907 00:05:20.503411   26504 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:20.503481   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:05:20.503492   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:20.503500   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:20.503506   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:20.507960   26504 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:05:20.507979   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:20.507989   26504 round_trippers.go:580]     Audit-Id: 3b0888f0-1844-4abb-8b27-0926c25a0fd6
	I0907 00:05:20.508001   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:20.508011   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:20.508020   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:20.508033   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:20.508042   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:20 GMT
	I0907 00:05:20.508231   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"446","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0907 00:05:20.508627   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:05:20.508640   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:20.508650   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:20.508660   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:20.511864   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:20.511884   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:20.511894   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:20 GMT
	I0907 00:05:20.511903   26504 round_trippers.go:580]     Audit-Id: 56023424-1d9c-41b9-ae95-31cca61a9510
	I0907 00:05:20.511911   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:20.511924   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:20.511933   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:20.511943   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:20.512264   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:05:20.512556   26504 pod_ready.go:92] pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace has status "Ready":"True"
	I0907 00:05:20.512569   26504 pod_ready.go:81] duration metric: took 9.138741ms waiting for pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:20.512576   26504 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:20.512628   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:05:20.512637   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:20.512644   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:20.512650   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:20.514789   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:05:20.514807   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:20.514816   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:20.514825   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:20.514832   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:20.514838   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:20.514844   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:20 GMT
	I0907 00:05:20.514850   26504 round_trippers.go:580]     Audit-Id: 81b26301-d7da-41da-bc66-6a240011a275
	I0907 00:05:20.515081   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"434","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0907 00:05:20.515434   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:05:20.515445   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:20.515452   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:20.515459   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:20.517122   26504 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:05:20.517135   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:20.517144   26504 round_trippers.go:580]     Audit-Id: 551a1e90-c199-481f-bb15-d96b650cbc39
	I0907 00:05:20.517157   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:20.517171   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:20.517180   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:20.517189   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:20.517195   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:20 GMT
	I0907 00:05:20.517302   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:05:20.517586   26504 pod_ready.go:92] pod "etcd-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:05:20.517598   26504 pod_ready.go:81] duration metric: took 5.0178ms waiting for pod "etcd-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:20.517612   26504 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:20.517667   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-816061
	I0907 00:05:20.517677   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:20.517686   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:20.517697   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:20.519749   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:05:20.519761   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:20.519767   26504 round_trippers.go:580]     Audit-Id: 351f1f00-6c0c-4f6b-be01-36de2df723f2
	I0907 00:05:20.519773   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:20.519778   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:20.519788   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:20.519794   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:20.519803   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:20 GMT
	I0907 00:05:20.519975   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-816061","namespace":"kube-system","uid":"dbbbc2db-98c3-44e3-a18d-947bad7ffda2","resourceVersion":"435","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.212:8443","kubernetes.io/config.hash":"17d9280f4f521ce2f8119c5c317f1d67","kubernetes.io/config.mirror":"17d9280f4f521ce2f8119c5c317f1d67","kubernetes.io/config.seen":"2023-09-07T00:04:04.251716113Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0907 00:05:20.520326   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:05:20.520339   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:20.520346   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:20.520352   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:20.522180   26504 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:05:20.522191   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:20.522196   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:20.522202   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:20.522207   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:20.522213   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:20.522221   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:20 GMT
	I0907 00:05:20.522227   26504 round_trippers.go:580]     Audit-Id: 5bfad502-92c6-4474-8d3b-84022e2a94eb
	I0907 00:05:20.522397   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:05:20.522754   26504 pod_ready.go:92] pod "kube-apiserver-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:05:20.522768   26504 pod_ready.go:81] duration metric: took 5.145063ms waiting for pod "kube-apiserver-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:20.522790   26504 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:20.522849   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-816061
	I0907 00:05:20.522859   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:20.522867   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:20.522877   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:20.524694   26504 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:05:20.524712   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:20.524720   26504 round_trippers.go:580]     Audit-Id: bd9bea2a-e5c6-43ea-93bc-885b0d6bd896
	I0907 00:05:20.524729   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:20.524747   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:20.524756   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:20.524769   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:20.524780   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:20 GMT
	I0907 00:05:20.524909   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-816061","namespace":"kube-system","uid":"ea192806-6f42-4471-8e73-ae96aa3bfa06","resourceVersion":"433","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45d88e9a1c94ef1043c5c8795b51d51f","kubernetes.io/config.mirror":"45d88e9a1c94ef1043c5c8795b51d51f","kubernetes.io/config.seen":"2023-09-07T00:04:04.251717776Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0907 00:05:20.525236   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:05:20.525246   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:20.525253   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:20.525260   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:20.526940   26504 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:05:20.526956   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:20.526965   26504 round_trippers.go:580]     Audit-Id: 01b5a616-3147-4525-a3d7-c5e9e3edd084
	I0907 00:05:20.526974   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:20.526984   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:20.526994   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:20.527006   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:20.527015   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:20 GMT
	I0907 00:05:20.527164   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:05:20.527472   26504 pod_ready.go:92] pod "kube-controller-manager-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:05:20.527487   26504 pod_ready.go:81] duration metric: took 4.685823ms waiting for pod "kube-controller-manager-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:20.527498   26504 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wswp" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:20.691775   26504 request.go:629] Waited for 164.229166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wswp
	I0907 00:05:20.691832   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wswp
	I0907 00:05:20.691836   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:20.691844   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:20.691850   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:20.696029   26504 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:05:20.696046   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:20.696053   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:20 GMT
	I0907 00:05:20.696059   26504 round_trippers.go:580]     Audit-Id: d1e2ce8e-bef2-4794-8e64-68c1258e582a
	I0907 00:05:20.696065   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:20.696075   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:20.696086   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:20.696096   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:20.696248   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2wswp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4d99412b-fc2d-4fce-a7e2-80da3e220e07","resourceVersion":"522","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0907 00:05:20.892007   26504 request.go:629] Waited for 195.380755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:20.892083   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:05:20.892089   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:20.892170   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:20.892182   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:20.895847   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:20.895865   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:20.895872   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:20.895878   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:20.895883   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:20.895891   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:20.895900   26504 round_trippers.go:580]     Content-Length: 3725
	I0907 00:05:20.895909   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:20 GMT
	I0907 00:05:20.895919   26504 round_trippers.go:580]     Audit-Id: be9969a7-d8d0-436a-82e8-ac006bf759f5
	I0907 00:05:20.896004   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"537","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I0907 00:05:20.896310   26504 pod_ready.go:92] pod "kube-proxy-2wswp" in "kube-system" namespace has status "Ready":"True"
	I0907 00:05:20.896328   26504 pod_ready.go:81] duration metric: took 368.819885ms waiting for pod "kube-proxy-2wswp" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:20.896340   26504 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbzlv" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:21.091711   26504 request.go:629] Waited for 195.301471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbzlv
	I0907 00:05:21.091760   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbzlv
	I0907 00:05:21.091765   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:21.091773   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:21.091779   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:21.095277   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:21.095298   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:21.095312   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:21 GMT
	I0907 00:05:21.095325   26504 round_trippers.go:580]     Audit-Id: 59a1f880-9ee1-44ef-a88a-d2d7dddfbc52
	I0907 00:05:21.095335   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:21.095343   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:21.095351   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:21.095364   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:21.095761   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbzlv","generateName":"kube-proxy-","namespace":"kube-system","uid":"6b9717d8-174b-4713-a941-382c81cc659e","resourceVersion":"414","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0907 00:05:21.292517   26504 request.go:629] Waited for 196.37325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:05:21.292596   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:05:21.292603   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:21.292615   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:21.292623   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:21.295983   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:21.296006   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:21.296016   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:21.296024   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:21.296034   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:21.296047   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:21.296060   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:21 GMT
	I0907 00:05:21.296073   26504 round_trippers.go:580]     Audit-Id: 7127e0ff-f8e2-4756-866c-f716e690f44a
	I0907 00:05:21.296290   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:05:21.296689   26504 pod_ready.go:92] pod "kube-proxy-tbzlv" in "kube-system" namespace has status "Ready":"True"
	I0907 00:05:21.296705   26504 pod_ready.go:81] duration metric: took 400.357696ms waiting for pod "kube-proxy-tbzlv" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:21.296716   26504 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:21.492157   26504 request.go:629] Waited for 195.366046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-816061
	I0907 00:05:21.492216   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-816061
	I0907 00:05:21.492221   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:21.492232   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:21.492242   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:21.495970   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:21.495992   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:21.496000   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:21.496011   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:21.496020   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:21.496029   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:21.496039   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:21 GMT
	I0907 00:05:21.496047   26504 round_trippers.go:580]     Audit-Id: 54d762f6-91f2-465c-b8fe-355dfd04f9a2
	I0907 00:05:21.496538   26504 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-816061","namespace":"kube-system","uid":"3fa4fad1-c309-42a9-af5f-28e6398492c7","resourceVersion":"432","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ac3fb26098ffac0d0e40ebb845f9b9fe","kubernetes.io/config.mirror":"ac3fb26098ffac0d0e40ebb845f9b9fe","kubernetes.io/config.seen":"2023-09-07T00:04:04.251718754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0907 00:05:21.692200   26504 request.go:629] Waited for 195.332478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:05:21.692274   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:05:21.692282   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:21.692299   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:21.692310   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:21.695203   26504 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:05:21.695219   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:21.695229   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:21 GMT
	I0907 00:05:21.695237   26504 round_trippers.go:580]     Audit-Id: dc4328e9-3099-46d7-bda0-7c1607e80be6
	I0907 00:05:21.695246   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:21.695258   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:21.695268   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:21.695282   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:21.695700   26504 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0907 00:05:21.695995   26504 pod_ready.go:92] pod "kube-scheduler-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:05:21.696008   26504 pod_ready.go:81] duration metric: took 399.28061ms waiting for pod "kube-scheduler-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:05:21.696017   26504 pod_ready.go:38] duration metric: took 1.200741951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
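
The GET requests above are the pod_ready poll: each control-plane pod is fetched from the API server until its Ready condition is True, and the "Waited ... due to client-side throttling" lines show client-go's client-side rate limiter pacing those requests. For reference, a minimal client-go sketch of the same pattern follows; the function names, the 2s poll interval, the kubeconfig source, and the QPS/Burst values are illustrative assumptions, not minikube's actual implementation.

// poll_ready.go - minimal sketch: wait for a pod's Ready condition via client-go.
// Assumptions: pod/namespace names, poll interval and QPS/Burst are illustrative.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the API server until the pod is Ready or the timeout expires.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second) // coarse interval keeps request volume under the client-side limiter
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	// Raising QPS/Burst reduces the "client-side throttling" waits seen in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForPodReady(context.Background(), cs, "kube-system", "etcd-multinode-816061", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is Ready")
}
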
	I0907 00:05:21.696037   26504 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:05:21.696076   26504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:05:21.709343   26504 system_svc.go:56] duration metric: took 13.299637ms WaitForService to wait for kubelet.
	I0907 00:05:21.709367   26504 kubeadm.go:581] duration metric: took 11.242951741s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:05:21.709389   26504 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:05:21.891755   26504 request.go:629] Waited for 182.296568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I0907 00:05:21.891810   26504 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I0907 00:05:21.891815   26504 round_trippers.go:469] Request Headers:
	I0907 00:05:21.891823   26504 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:05:21.891829   26504 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:05:21.895342   26504 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:05:21.895366   26504 round_trippers.go:577] Response Headers:
	I0907 00:05:21.895379   26504 round_trippers.go:580]     Audit-Id: 37ad24f1-41c1-4cf5-8281-aedfafb0cbe5
	I0907 00:05:21.895387   26504 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:05:21.895395   26504 round_trippers.go:580]     Content-Type: application/json
	I0907 00:05:21.895403   26504 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:05:21.895411   26504 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:05:21.895428   26504 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:05:21 GMT
	I0907 00:05:21.895911   26504 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"539"},"items":[{"metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"425","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9525 chars]
	I0907 00:05:21.896299   26504 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:05:21.896314   26504 node_conditions.go:123] node cpu capacity is 2
	I0907 00:05:21.896321   26504 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:05:21.896325   26504 node_conditions.go:123] node cpu capacity is 2
	I0907 00:05:21.896329   26504 node_conditions.go:105] duration metric: took 186.935863ms to run NodePressure ...
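
The node_conditions lines above list all nodes, read each node's ephemeral-storage and CPU capacity, and verify that no pressure condition is set. A compact sketch of the equivalent check with client-go is below; the package and function names are illustrative, and the clientset is assumed to be built the same way as in the previous sketch.

// nodecheck.go - minimal sketch: report node capacity and flag pressure conditions.
package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// CheckNodePressure prints each node's CPU / ephemeral-storage capacity and
// returns an error if any node reports memory, disk, or PID pressure.
func CheckNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, cond := range n.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if cond.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s reports %s", n.Name, cond.Type)
				}
			}
		}
	}
	return nil
}
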
	I0907 00:05:21.896338   26504 start.go:228] waiting for startup goroutines ...
	I0907 00:05:21.896363   26504 start.go:242] writing updated cluster config ...
	I0907 00:05:21.896636   26504 ssh_runner.go:195] Run: rm -f paused
	I0907 00:05:21.944371   26504 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:05:21.946754   26504 out.go:177] * Done! kubectl is now configured to use "multinode-816061" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-07 00:03:30 UTC, ends at Thu 2023-09-07 00:05:30 UTC. --
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.334343234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dcf859bc-dfc3-4f8b-b0e7-848e6d1060cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.334753749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19a8687d7ab396f5e8d61ef6ac43f1c56738bfa2670610c6abc51795db4ee56f,PodSandboxId:29ab3574131725f563085f3db6a84e6a91f556c0c8c3eb4ae35c015262c61366,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045126244516503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9639814d1370273e90f53ed73a8e8ea32bd543ba54339a6d6d156754a488562a,PodSandboxId:942ad367d094c2ffbfb8e070821c225f7a77e0ebb9bda3fc39c6dc961d7d0b67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045064832341038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d96bc357123892652af6e67308fcbdfc1d8ae621ef9367ab046a9d6bb3120d,PodSandboxId:9f901d92765ff37899629e691b75acde8d420b8208c91d4fc720de4b206de731,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045064591203626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a60feb4ccbb2c637e704cbc472b9dfa502db856dd7b49a88ab0c4cb3323263,PodSandboxId:509cbca08c4f4fb7892ff502115c2748dd7617188681e5c087a020e14ec28027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045061967342505,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfc3eecef538134bcc139efebf2ad409e3292d5c002e39b17667f4e0cb52f64d,PodSandboxId:2b9940ce3f94d32733ab4e9269c61a73a27a6f77392f8aab4c78041438321a1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045059460848932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d14bb9a7c4034c00c7862ef92d8facc88ce6720aaf09e1573268d68a15138c8,PodSandboxId:44a7d55c32d2354a4a6d9d5002a1860b922897ee659583d96ecab9ee15e99519,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045036924815797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8a4344,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b13ebde6f5982ff5698b90420ddbc8680eb79a3d89c97508248e54a726d10f,PodSandboxId:01b70850393b6d362f405d884056874693870fbef5017f88959c42b3d969fa03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045036689102265,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotations:map[string]string{io.kubernetes.container.hash:
61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f38db3e74050705b33be5e1bc49db9beb77bdef514039c7998c67a96e3707f,PodSandboxId:9953b67af54a29927b03a2bd001d4ef86dbcfc70564a9ba7a6214a969fcfaece,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045036389841271,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.ku
bernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e80e012439df472c81397d2989f8ebc1392fe20f1560a27aecc9988f01b4ac,PodSandboxId:81ab1759a28c0108697152d98f3342bd3c21a01275aa8f94e4d8c6279e3a03aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045036254882666,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 19eff46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dcf859bc-dfc3-4f8b-b0e7-848e6d1060cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.366760924Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=236ebce5-a18d-43b3-b935-204617336430 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.367178873Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:29ab3574131725f563085f3db6a84e6a91f556c0c8c3eb4ae35c015262c61366,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-zvzjl,Uid:346dd02e-d6b2-481f-837e-45b618a3fd04,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694045123056172165,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:05:22.719483192Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:942ad367d094c2ffbfb8e070821c225f7a77e0ebb9bda3fc39c6dc961d7d0b67,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-8ktxh,Uid:c2574ba0-f19a-40c1-a06f-601bb17661f6,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1694045064026312343,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:04:23.678499463Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f901d92765ff37899629e691b75acde8d420b8208c91d4fc720de4b206de731,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694045064021363758,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]st
ring{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-07T00:04:23.672528749Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b9940ce3f94d32733ab4e9269c61a73a27a6f77392f8aab4c78041438321a1b,Metadata:&PodSandboxMetadata{Name:kube-proxy-tbzlv,Uid:6b9717d8-174b-4713-a941-382c81cc659e,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1694045059034763589,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc659e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:04:17.801443555Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:509cbca08c4f4fb7892ff502115c2748dd7617188681e5c087a020e14ec28027,Metadata:&PodSandboxMetadata{Name:kindnet-xgbtc,Uid:137c032b-12d1-4179-8416-0f3cc5733842,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694045058995110072,Labels:map[string]string{app: kindnet,controller-revision-hash: 77b9cf4878,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137c032b-12d1-4179-8416-0f3cc5733842,k8s-app: kindnet,pod-template-gener
ation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:04:17.760983799Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:81ab1759a28c0108697152d98f3342bd3c21a01275aa8f94e4d8c6279e3a03aa,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-816061,Uid:17d9280f4f521ce2f8119c5c317f1d67,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694045035775237040,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.212:8443,kubernetes.io/config.hash: 17d9280f4f521ce2f8119c5c317f1d67,kubernetes.io/config.seen: 2023-09-07T00:03:55.224336085Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:01b70850393b6d362f405d8840568746
93870fbef5017f88959c42b3d969fa03,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-816061,Uid:ac3fb26098ffac0d0e40ebb845f9b9fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694045035756161223,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ac3fb26098ffac0d0e40ebb845f9b9fe,kubernetes.io/config.seen: 2023-09-07T00:03:55.224333958Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:44a7d55c32d2354a4a6d9d5002a1860b922897ee659583d96ecab9ee15e99519,Metadata:&PodSandboxMetadata{Name:etcd-multinode-816061,Uid:98883a05b83cf4cdfaf6946888d8cb74,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694045035721445512,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kuberne
tes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.212:2379,kubernetes.io/config.hash: 98883a05b83cf4cdfaf6946888d8cb74,kubernetes.io/config.seen: 2023-09-07T00:03:55.224335097Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9953b67af54a29927b03a2bd001d4ef86dbcfc70564a9ba7a6214a969fcfaece,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-816061,Uid:45d88e9a1c94ef1043c5c8795b51d51f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694045035717069174,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,tier: control-plane,},Annotations:map[string]string{kuber
netes.io/config.hash: 45d88e9a1c94ef1043c5c8795b51d51f,kubernetes.io/config.seen: 2023-09-07T00:03:55.224330385Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=236ebce5-a18d-43b3-b935-204617336430 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.368435635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=53dee18f-53ea-48d7-8810-8513dc30a51b name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.368793330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=53dee18f-53ea-48d7-8810-8513dc30a51b name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.369195909Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19a8687d7ab396f5e8d61ef6ac43f1c56738bfa2670610c6abc51795db4ee56f,PodSandboxId:29ab3574131725f563085f3db6a84e6a91f556c0c8c3eb4ae35c015262c61366,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045126244516503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9639814d1370273e90f53ed73a8e8ea32bd543ba54339a6d6d156754a488562a,PodSandboxId:942ad367d094c2ffbfb8e070821c225f7a77e0ebb9bda3fc39c6dc961d7d0b67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045064832341038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d96bc357123892652af6e67308fcbdfc1d8ae621ef9367ab046a9d6bb3120d,PodSandboxId:9f901d92765ff37899629e691b75acde8d420b8208c91d4fc720de4b206de731,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045064591203626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a60feb4ccbb2c637e704cbc472b9dfa502db856dd7b49a88ab0c4cb3323263,PodSandboxId:509cbca08c4f4fb7892ff502115c2748dd7617188681e5c087a020e14ec28027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045061967342505,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfc3eecef538134bcc139efebf2ad409e3292d5c002e39b17667f4e0cb52f64d,PodSandboxId:2b9940ce3f94d32733ab4e9269c61a73a27a6f77392f8aab4c78041438321a1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045059460848932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d14bb9a7c4034c00c7862ef92d8facc88ce6720aaf09e1573268d68a15138c8,PodSandboxId:44a7d55c32d2354a4a6d9d5002a1860b922897ee659583d96ecab9ee15e99519,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045036924815797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8a4344,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b13ebde6f5982ff5698b90420ddbc8680eb79a3d89c97508248e54a726d10f,PodSandboxId:01b70850393b6d362f405d884056874693870fbef5017f88959c42b3d969fa03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045036689102265,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotations:map[string]string{io.kubernetes.container.hash:
61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f38db3e74050705b33be5e1bc49db9beb77bdef514039c7998c67a96e3707f,PodSandboxId:9953b67af54a29927b03a2bd001d4ef86dbcfc70564a9ba7a6214a969fcfaece,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045036389841271,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.ku
bernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e80e012439df472c81397d2989f8ebc1392fe20f1560a27aecc9988f01b4ac,PodSandboxId:81ab1759a28c0108697152d98f3342bd3c21a01275aa8f94e4d8c6279e3a03aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045036254882666,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 19eff46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=53dee18f-53ea-48d7-8810-8513dc30a51b name=/runtime.v1.RuntimeService/ListContainers
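
The CRI-O journal entries above are the runtime's side of the kubelet's ListContainers/ListPodSandbox polling over the CRI gRPC API (the second ListContainers call filters on CONTAINER_RUNNING). The same query can be issued directly against the crio.sock endpoint logged in the node annotations; the sketch below uses the published k8s.io/cri-api v1 client and is an illustration, not part of the test suite.

// cri_list.go - minimal sketch: issue the ListContainers call seen in the
// CRI-O debug log (filter: CONTAINER_RUNNING) directly over the CRI gRPC API.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path taken from the kubeadm cri-socket annotation in the log above.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			State: &runtimeapi.ContainerStateValue{State: runtimeapi.ContainerState_CONTAINER_RUNNING},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Container IDs are 64 hex characters; print a short prefix for readability.
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}
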
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.386104841Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=930820cf-7615-42ff-9769-c81f6ce58239 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.386191394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=930820cf-7615-42ff-9769-c81f6ce58239 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.386489919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19a8687d7ab396f5e8d61ef6ac43f1c56738bfa2670610c6abc51795db4ee56f,PodSandboxId:29ab3574131725f563085f3db6a84e6a91f556c0c8c3eb4ae35c015262c61366,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045126244516503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9639814d1370273e90f53ed73a8e8ea32bd543ba54339a6d6d156754a488562a,PodSandboxId:942ad367d094c2ffbfb8e070821c225f7a77e0ebb9bda3fc39c6dc961d7d0b67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045064832341038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d96bc357123892652af6e67308fcbdfc1d8ae621ef9367ab046a9d6bb3120d,PodSandboxId:9f901d92765ff37899629e691b75acde8d420b8208c91d4fc720de4b206de731,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045064591203626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a60feb4ccbb2c637e704cbc472b9dfa502db856dd7b49a88ab0c4cb3323263,PodSandboxId:509cbca08c4f4fb7892ff502115c2748dd7617188681e5c087a020e14ec28027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045061967342505,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfc3eecef538134bcc139efebf2ad409e3292d5c002e39b17667f4e0cb52f64d,PodSandboxId:2b9940ce3f94d32733ab4e9269c61a73a27a6f77392f8aab4c78041438321a1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045059460848932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d14bb9a7c4034c00c7862ef92d8facc88ce6720aaf09e1573268d68a15138c8,PodSandboxId:44a7d55c32d2354a4a6d9d5002a1860b922897ee659583d96ecab9ee15e99519,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045036924815797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8a4344,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b13ebde6f5982ff5698b90420ddbc8680eb79a3d89c97508248e54a726d10f,PodSandboxId:01b70850393b6d362f405d884056874693870fbef5017f88959c42b3d969fa03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045036689102265,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotations:map[string]string{io.kubernetes.container.hash:
61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f38db3e74050705b33be5e1bc49db9beb77bdef514039c7998c67a96e3707f,PodSandboxId:9953b67af54a29927b03a2bd001d4ef86dbcfc70564a9ba7a6214a969fcfaece,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045036389841271,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.ku
bernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e80e012439df472c81397d2989f8ebc1392fe20f1560a27aecc9988f01b4ac,PodSandboxId:81ab1759a28c0108697152d98f3342bd3c21a01275aa8f94e4d8c6279e3a03aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045036254882666,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 19eff46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=930820cf-7615-42ff-9769-c81f6ce58239 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.429029760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c028d06b-161b-44c6-8537-99e215fd1024 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.429154586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c028d06b-161b-44c6-8537-99e215fd1024 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.429505560Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19a8687d7ab396f5e8d61ef6ac43f1c56738bfa2670610c6abc51795db4ee56f,PodSandboxId:29ab3574131725f563085f3db6a84e6a91f556c0c8c3eb4ae35c015262c61366,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045126244516503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9639814d1370273e90f53ed73a8e8ea32bd543ba54339a6d6d156754a488562a,PodSandboxId:942ad367d094c2ffbfb8e070821c225f7a77e0ebb9bda3fc39c6dc961d7d0b67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045064832341038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d96bc357123892652af6e67308fcbdfc1d8ae621ef9367ab046a9d6bb3120d,PodSandboxId:9f901d92765ff37899629e691b75acde8d420b8208c91d4fc720de4b206de731,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045064591203626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a60feb4ccbb2c637e704cbc472b9dfa502db856dd7b49a88ab0c4cb3323263,PodSandboxId:509cbca08c4f4fb7892ff502115c2748dd7617188681e5c087a020e14ec28027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045061967342505,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfc3eecef538134bcc139efebf2ad409e3292d5c002e39b17667f4e0cb52f64d,PodSandboxId:2b9940ce3f94d32733ab4e9269c61a73a27a6f77392f8aab4c78041438321a1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045059460848932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d14bb9a7c4034c00c7862ef92d8facc88ce6720aaf09e1573268d68a15138c8,PodSandboxId:44a7d55c32d2354a4a6d9d5002a1860b922897ee659583d96ecab9ee15e99519,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045036924815797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8a4344,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b13ebde6f5982ff5698b90420ddbc8680eb79a3d89c97508248e54a726d10f,PodSandboxId:01b70850393b6d362f405d884056874693870fbef5017f88959c42b3d969fa03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045036689102265,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotations:map[string]string{io.kubernetes.container.hash:
61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f38db3e74050705b33be5e1bc49db9beb77bdef514039c7998c67a96e3707f,PodSandboxId:9953b67af54a29927b03a2bd001d4ef86dbcfc70564a9ba7a6214a969fcfaece,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045036389841271,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.ku
bernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e80e012439df472c81397d2989f8ebc1392fe20f1560a27aecc9988f01b4ac,PodSandboxId:81ab1759a28c0108697152d98f3342bd3c21a01275aa8f94e4d8c6279e3a03aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045036254882666,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 19eff46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c028d06b-161b-44c6-8537-99e215fd1024 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.469246634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aca4818a-5921-426d-82f6-70584cc6ab77 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.469372422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aca4818a-5921-426d-82f6-70584cc6ab77 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.469840728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19a8687d7ab396f5e8d61ef6ac43f1c56738bfa2670610c6abc51795db4ee56f,PodSandboxId:29ab3574131725f563085f3db6a84e6a91f556c0c8c3eb4ae35c015262c61366,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045126244516503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9639814d1370273e90f53ed73a8e8ea32bd543ba54339a6d6d156754a488562a,PodSandboxId:942ad367d094c2ffbfb8e070821c225f7a77e0ebb9bda3fc39c6dc961d7d0b67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045064832341038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d96bc357123892652af6e67308fcbdfc1d8ae621ef9367ab046a9d6bb3120d,PodSandboxId:9f901d92765ff37899629e691b75acde8d420b8208c91d4fc720de4b206de731,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045064591203626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a60feb4ccbb2c637e704cbc472b9dfa502db856dd7b49a88ab0c4cb3323263,PodSandboxId:509cbca08c4f4fb7892ff502115c2748dd7617188681e5c087a020e14ec28027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045061967342505,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfc3eecef538134bcc139efebf2ad409e3292d5c002e39b17667f4e0cb52f64d,PodSandboxId:2b9940ce3f94d32733ab4e9269c61a73a27a6f77392f8aab4c78041438321a1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045059460848932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d14bb9a7c4034c00c7862ef92d8facc88ce6720aaf09e1573268d68a15138c8,PodSandboxId:44a7d55c32d2354a4a6d9d5002a1860b922897ee659583d96ecab9ee15e99519,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045036924815797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8a4344,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b13ebde6f5982ff5698b90420ddbc8680eb79a3d89c97508248e54a726d10f,PodSandboxId:01b70850393b6d362f405d884056874693870fbef5017f88959c42b3d969fa03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045036689102265,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotations:map[string]string{io.kubernetes.container.hash:
61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f38db3e74050705b33be5e1bc49db9beb77bdef514039c7998c67a96e3707f,PodSandboxId:9953b67af54a29927b03a2bd001d4ef86dbcfc70564a9ba7a6214a969fcfaece,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045036389841271,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.ku
bernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e80e012439df472c81397d2989f8ebc1392fe20f1560a27aecc9988f01b4ac,PodSandboxId:81ab1759a28c0108697152d98f3342bd3c21a01275aa8f94e4d8c6279e3a03aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045036254882666,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 19eff46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aca4818a-5921-426d-82f6-70584cc6ab77 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.515442122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f4d1983e-533c-417b-b489-b2c4c194659d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.515532716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f4d1983e-533c-417b-b489-b2c4c194659d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.515864747Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19a8687d7ab396f5e8d61ef6ac43f1c56738bfa2670610c6abc51795db4ee56f,PodSandboxId:29ab3574131725f563085f3db6a84e6a91f556c0c8c3eb4ae35c015262c61366,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045126244516503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9639814d1370273e90f53ed73a8e8ea32bd543ba54339a6d6d156754a488562a,PodSandboxId:942ad367d094c2ffbfb8e070821c225f7a77e0ebb9bda3fc39c6dc961d7d0b67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045064832341038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d96bc357123892652af6e67308fcbdfc1d8ae621ef9367ab046a9d6bb3120d,PodSandboxId:9f901d92765ff37899629e691b75acde8d420b8208c91d4fc720de4b206de731,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045064591203626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a60feb4ccbb2c637e704cbc472b9dfa502db856dd7b49a88ab0c4cb3323263,PodSandboxId:509cbca08c4f4fb7892ff502115c2748dd7617188681e5c087a020e14ec28027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045061967342505,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfc3eecef538134bcc139efebf2ad409e3292d5c002e39b17667f4e0cb52f64d,PodSandboxId:2b9940ce3f94d32733ab4e9269c61a73a27a6f77392f8aab4c78041438321a1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045059460848932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d14bb9a7c4034c00c7862ef92d8facc88ce6720aaf09e1573268d68a15138c8,PodSandboxId:44a7d55c32d2354a4a6d9d5002a1860b922897ee659583d96ecab9ee15e99519,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045036924815797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8a4344,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b13ebde6f5982ff5698b90420ddbc8680eb79a3d89c97508248e54a726d10f,PodSandboxId:01b70850393b6d362f405d884056874693870fbef5017f88959c42b3d969fa03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045036689102265,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotations:map[string]string{io.kubernetes.container.hash:
61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f38db3e74050705b33be5e1bc49db9beb77bdef514039c7998c67a96e3707f,PodSandboxId:9953b67af54a29927b03a2bd001d4ef86dbcfc70564a9ba7a6214a969fcfaece,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045036389841271,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.ku
bernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e80e012439df472c81397d2989f8ebc1392fe20f1560a27aecc9988f01b4ac,PodSandboxId:81ab1759a28c0108697152d98f3342bd3c21a01275aa8f94e4d8c6279e3a03aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045036254882666,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 19eff46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f4d1983e-533c-417b-b489-b2c4c194659d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.551868957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=301be797-9c45-4436-a3bd-e5cd9c032c88 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.551956327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=301be797-9c45-4436-a3bd-e5cd9c032c88 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.552148939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19a8687d7ab396f5e8d61ef6ac43f1c56738bfa2670610c6abc51795db4ee56f,PodSandboxId:29ab3574131725f563085f3db6a84e6a91f556c0c8c3eb4ae35c015262c61366,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045126244516503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9639814d1370273e90f53ed73a8e8ea32bd543ba54339a6d6d156754a488562a,PodSandboxId:942ad367d094c2ffbfb8e070821c225f7a77e0ebb9bda3fc39c6dc961d7d0b67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045064832341038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d96bc357123892652af6e67308fcbdfc1d8ae621ef9367ab046a9d6bb3120d,PodSandboxId:9f901d92765ff37899629e691b75acde8d420b8208c91d4fc720de4b206de731,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045064591203626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a60feb4ccbb2c637e704cbc472b9dfa502db856dd7b49a88ab0c4cb3323263,PodSandboxId:509cbca08c4f4fb7892ff502115c2748dd7617188681e5c087a020e14ec28027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045061967342505,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfc3eecef538134bcc139efebf2ad409e3292d5c002e39b17667f4e0cb52f64d,PodSandboxId:2b9940ce3f94d32733ab4e9269c61a73a27a6f77392f8aab4c78041438321a1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045059460848932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d14bb9a7c4034c00c7862ef92d8facc88ce6720aaf09e1573268d68a15138c8,PodSandboxId:44a7d55c32d2354a4a6d9d5002a1860b922897ee659583d96ecab9ee15e99519,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045036924815797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8a4344,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b13ebde6f5982ff5698b90420ddbc8680eb79a3d89c97508248e54a726d10f,PodSandboxId:01b70850393b6d362f405d884056874693870fbef5017f88959c42b3d969fa03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045036689102265,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotations:map[string]string{io.kubernetes.container.hash:
61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f38db3e74050705b33be5e1bc49db9beb77bdef514039c7998c67a96e3707f,PodSandboxId:9953b67af54a29927b03a2bd001d4ef86dbcfc70564a9ba7a6214a969fcfaece,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045036389841271,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.ku
bernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e80e012439df472c81397d2989f8ebc1392fe20f1560a27aecc9988f01b4ac,PodSandboxId:81ab1759a28c0108697152d98f3342bd3c21a01275aa8f94e4d8c6279e3a03aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045036254882666,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 19eff46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=301be797-9c45-4436-a3bd-e5cd9c032c88 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.588760871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4f9114e3-8700-4146-a045-166506eaccbf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.588858855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4f9114e3-8700-4146-a045-166506eaccbf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:05:30 multinode-816061 crio[715]: time="2023-09-07 00:05:30.589077307Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19a8687d7ab396f5e8d61ef6ac43f1c56738bfa2670610c6abc51795db4ee56f,PodSandboxId:29ab3574131725f563085f3db6a84e6a91f556c0c8c3eb4ae35c015262c61366,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045126244516503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9639814d1370273e90f53ed73a8e8ea32bd543ba54339a6d6d156754a488562a,PodSandboxId:942ad367d094c2ffbfb8e070821c225f7a77e0ebb9bda3fc39c6dc961d7d0b67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045064832341038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d96bc357123892652af6e67308fcbdfc1d8ae621ef9367ab046a9d6bb3120d,PodSandboxId:9f901d92765ff37899629e691b75acde8d420b8208c91d4fc720de4b206de731,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045064591203626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a60feb4ccbb2c637e704cbc472b9dfa502db856dd7b49a88ab0c4cb3323263,PodSandboxId:509cbca08c4f4fb7892ff502115c2748dd7617188681e5c087a020e14ec28027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045061967342505,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfc3eecef538134bcc139efebf2ad409e3292d5c002e39b17667f4e0cb52f64d,PodSandboxId:2b9940ce3f94d32733ab4e9269c61a73a27a6f77392f8aab4c78041438321a1b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045059460848932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d14bb9a7c4034c00c7862ef92d8facc88ce6720aaf09e1573268d68a15138c8,PodSandboxId:44a7d55c32d2354a4a6d9d5002a1860b922897ee659583d96ecab9ee15e99519,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045036924815797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8a4344,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b13ebde6f5982ff5698b90420ddbc8680eb79a3d89c97508248e54a726d10f,PodSandboxId:01b70850393b6d362f405d884056874693870fbef5017f88959c42b3d969fa03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045036689102265,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotations:map[string]string{io.kubernetes.container.hash:
61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66f38db3e74050705b33be5e1bc49db9beb77bdef514039c7998c67a96e3707f,PodSandboxId:9953b67af54a29927b03a2bd001d4ef86dbcfc70564a9ba7a6214a969fcfaece,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045036389841271,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.ku
bernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e80e012439df472c81397d2989f8ebc1392fe20f1560a27aecc9988f01b4ac,PodSandboxId:81ab1759a28c0108697152d98f3342bd3c21a01275aa8f94e4d8c6279e3a03aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045036254882666,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 19eff46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4f9114e3-8700-4146-a045-166506eaccbf name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	19a8687d7ab39       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   29ab357413172
	9639814d13702       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   0                   942ad367d094c
	a0d96bc357123       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   9f901d92765ff
	92a60feb4ccbb       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      About a minute ago   Running             kindnet-cni               0                   509cbca08c4f4
	cfc3eecef5381       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      About a minute ago   Running             kube-proxy                0                   2b9940ce3f94d
	2d14bb9a7c403       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   44a7d55c32d23
	37b13ebde6f59       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      About a minute ago   Running             kube-scheduler            0                   01b70850393b6
	66f38db3e7405       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      About a minute ago   Running             kube-controller-manager   0                   9953b67af54a2
	02e80e012439d       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      About a minute ago   Running             kube-apiserver            0                   81ab1759a28c0
	
	* 
	* ==> coredns [9639814d1370273e90f53ed73a8e8ea32bd543ba54339a6d6d156754a488562a] <==
	* [INFO] 10.244.1.2:44813 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197884s
	[INFO] 10.244.0.3:38946 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101392s
	[INFO] 10.244.0.3:55989 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001750028s
	[INFO] 10.244.0.3:45385 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122617s
	[INFO] 10.244.0.3:44883 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077053s
	[INFO] 10.244.0.3:57819 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001264602s
	[INFO] 10.244.0.3:49898 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078986s
	[INFO] 10.244.0.3:48472 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110633s
	[INFO] 10.244.0.3:59811 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124332s
	[INFO] 10.244.1.2:53416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157412s
	[INFO] 10.244.1.2:41202 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115928s
	[INFO] 10.244.1.2:36393 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099784s
	[INFO] 10.244.1.2:38393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126105s
	[INFO] 10.244.0.3:55244 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134975s
	[INFO] 10.244.0.3:36566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086794s
	[INFO] 10.244.0.3:48502 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075746s
	[INFO] 10.244.0.3:46127 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093021s
	[INFO] 10.244.1.2:48553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130879s
	[INFO] 10.244.1.2:56891 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000232356s
	[INFO] 10.244.1.2:57350 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115857s
	[INFO] 10.244.1.2:44957 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000149691s
	[INFO] 10.244.0.3:52708 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081575s
	[INFO] 10.244.0.3:52089 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000043702s
	[INFO] 10.244.0.3:49050 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100693s
	[INFO] 10.244.0.3:50510 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000035276s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-816061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-816061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=multinode-816061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_07T00_04_05_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:04:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-816061
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Sep 2023 00:05:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 00:04:23 +0000   Thu, 07 Sep 2023 00:03:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 00:04:23 +0000   Thu, 07 Sep 2023 00:03:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 00:04:23 +0000   Thu, 07 Sep 2023 00:03:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 00:04:23 +0000   Thu, 07 Sep 2023 00:04:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    multinode-816061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 73622b4a66c04eabb97791231e099de8
	  System UUID:                73622b4a-66c0-4eab-b977-91231e099de8
	  Boot ID:                    7667b948-cb5a-47ed-b373-fc8ec2a2748a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-zvzjl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-8ktxh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     73s
	  kube-system                 etcd-multinode-816061                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         86s
	  kube-system                 kindnet-xgbtc                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      73s
	  kube-system                 kube-apiserver-multinode-816061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-multinode-816061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-tbzlv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-multinode-816061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 71s                kube-proxy       
	  Normal  NodeHasSufficientMemory  95s (x8 over 95s)  kubelet          Node multinode-816061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x8 over 95s)  kubelet          Node multinode-816061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x7 over 95s)  kubelet          Node multinode-816061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  86s                kubelet          Node multinode-816061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s                kubelet          Node multinode-816061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s                kubelet          Node multinode-816061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           74s                node-controller  Node multinode-816061 event: Registered Node multinode-816061 in Controller
	  Normal  NodeReady                67s                kubelet          Node multinode-816061 status is now: NodeReady
	
	
	Name:               multinode-816061-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-816061-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:05:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-816061-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Sep 2023 00:05:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 00:05:20 +0000   Thu, 07 Sep 2023 00:05:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 00:05:20 +0000   Thu, 07 Sep 2023 00:05:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 00:05:20 +0000   Thu, 07 Sep 2023 00:05:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 00:05:20 +0000   Thu, 07 Sep 2023 00:05:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    multinode-816061-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 81777933e8a54565a8bde95c976c63f7
	  System UUID:                81777933-e8a5-4565-a8bd-e95c976c63f7
	  Boot ID:                    b2a11f07-fbb6-42b6-8be4-e19a5c1ebaed
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-mq552    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-gdck2               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21s
	  kube-system                 kube-proxy-2wswp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientMemory  21s (x5 over 22s)  kubelet          Node multinode-816061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x5 over 22s)  kubelet          Node multinode-816061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x5 over 22s)  kubelet          Node multinode-816061-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19s                node-controller  Node multinode-816061-m02 event: Registered Node multinode-816061-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-816061-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Sep 7 00:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074542] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.327272] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.454194] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152964] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.017136] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.210336] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.103902] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.143984] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.101174] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.233270] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +9.305403] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[Sep 7 00:04] systemd-fstab-generator[1257]: Ignoring "noauto" for root device
	[ +21.542672] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [2d14bb9a7c4034c00c7862ef92d8facc88ce6720aaf09e1573268d68a15138c8] <==
	* {"level":"info","ts":"2023-09-07T00:03:58.698004Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-07T00:03:58.698095Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.212:2380"}
	{"level":"info","ts":"2023-09-07T00:03:58.698391Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.212:2380"}
	{"level":"info","ts":"2023-09-07T00:03:58.701251Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"eed9c28654b6490f","initial-advertise-peer-urls":["https://192.168.39.212:2380"],"listen-peer-urls":["https://192.168.39.212:2380"],"advertise-client-urls":["https://192.168.39.212:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.212:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-07T00:03:58.701401Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-07T00:03:59.651328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-07T00:03:59.651431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-07T00:03:59.651467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f received MsgPreVoteResp from eed9c28654b6490f at term 1"}
	{"level":"info","ts":"2023-09-07T00:03:59.651497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f became candidate at term 2"}
	{"level":"info","ts":"2023-09-07T00:03:59.651522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f received MsgVoteResp from eed9c28654b6490f at term 2"}
	{"level":"info","ts":"2023-09-07T00:03:59.651549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f became leader at term 2"}
	{"level":"info","ts":"2023-09-07T00:03:59.651575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: eed9c28654b6490f elected leader eed9c28654b6490f at term 2"}
	{"level":"info","ts":"2023-09-07T00:03:59.653499Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"eed9c28654b6490f","local-member-attributes":"{Name:multinode-816061 ClientURLs:[https://192.168.39.212:2379]}","request-path":"/0/members/eed9c28654b6490f/attributes","cluster-id":"f8d3b95e5bbb719c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-07T00:03:59.653574Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:03:59.653772Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:03:59.654782Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-07T00:03:59.65495Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:03:59.654989Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:03:59.655088Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:03:59.655171Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:03:59.656103Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.212:2379"}
	{"level":"info","ts":"2023-09-07T00:03:59.654955Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-07T00:03:59.665848Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-07T00:05:27.084171Z","caller":"traceutil/trace.go:171","msg":"trace[1647862553] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"320.153547ms","start":"2023-09-07T00:05:26.763988Z","end":"2023-09-07T00:05:27.084142Z","steps":["trace[1647862553] 'process raft request'  (duration: 226.729562ms)","trace[1647862553] 'compare'  (duration: 92.851122ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-07T00:05:27.084906Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-07T00:05:26.763968Z","time spent":"320.404486ms","remote":"127.0.0.1:33458","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3005,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/default/busybox\" mod_revision:556 > success:<request_put:<key:\"/registry/deployments/default/busybox\" value_size:2960 >> failure:<request_range:<key:\"/registry/deployments/default/busybox\" > >"}
	
	* 
	* ==> kernel <==
	*  00:05:30 up 2 min,  0 users,  load average: 0.78, 0.40, 0.15
	Linux multinode-816061 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [92a60feb4ccbb2c637e704cbc472b9dfa502db856dd7b49a88ab0c4cb3323263] <==
	* I0907 00:04:22.727509       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0907 00:04:22.727791       1 main.go:107] hostIP = 192.168.39.212
	podIP = 192.168.39.212
	I0907 00:04:22.727923       1 main.go:116] setting mtu 1500 for CNI 
	I0907 00:04:22.727951       1 main.go:146] kindnetd IP family: "ipv4"
	I0907 00:04:22.727981       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0907 00:04:23.327293       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I0907 00:04:23.327412       1 main.go:227] handling current node
	I0907 00:04:33.341493       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I0907 00:04:33.341550       1 main.go:227] handling current node
	I0907 00:04:43.353582       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I0907 00:04:43.353735       1 main.go:227] handling current node
	I0907 00:04:53.367026       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I0907 00:04:53.367077       1 main.go:227] handling current node
	I0907 00:05:03.371952       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I0907 00:05:03.371997       1 main.go:227] handling current node
	I0907 00:05:13.385335       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I0907 00:05:13.385397       1 main.go:227] handling current node
	I0907 00:05:13.385413       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0907 00:05:13.385420       1 main.go:250] Node multinode-816061-m02 has CIDR [10.244.1.0/24] 
	I0907 00:05:13.385938       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.44 Flags: [] Table: 0} 
	I0907 00:05:23.394018       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I0907 00:05:23.394090       1 main.go:227] handling current node
	I0907 00:05:23.394113       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0907 00:05:23.394119       1 main.go:250] Node multinode-816061-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [02e80e012439df472c81397d2989f8ebc1392fe20f1560a27aecc9988f01b4ac] <==
	* I0907 00:04:01.099312       1 controller.go:624] quota admission added evaluator for: namespaces
	I0907 00:04:01.104327       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0907 00:04:01.109036       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0907 00:04:01.109183       1 aggregator.go:166] initial CRD sync complete...
	I0907 00:04:01.109288       1 autoregister_controller.go:141] Starting autoregister controller
	I0907 00:04:01.109311       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0907 00:04:01.109334       1 cache.go:39] Caches are synced for autoregister controller
	I0907 00:04:01.136920       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0907 00:04:01.162276       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0907 00:04:01.162373       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0907 00:04:01.972575       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0907 00:04:01.982868       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0907 00:04:01.982912       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0907 00:04:02.590026       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0907 00:04:02.644260       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0907 00:04:02.795014       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0907 00:04:02.816061       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.212]
	I0907 00:04:02.819407       1 controller.go:624] quota admission added evaluator for: endpoints
	I0907 00:04:02.830116       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0907 00:04:03.047164       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0907 00:04:04.140568       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0907 00:04:04.156369       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0907 00:04:04.166837       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0907 00:04:16.567731       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0907 00:04:17.604307       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [66f38db3e74050705b33be5e1bc49db9beb77bdef514039c7998c67a96e3707f] <==
	* I0907 00:04:18.111227       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="204.871259ms"
	I0907 00:04:18.113244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.036µs"
	I0907 00:04:23.688555       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.59µs"
	I0907 00:04:23.720230       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58µs"
	I0907 00:04:25.521839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.453937ms"
	I0907 00:04:25.522228       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.854µs"
	I0907 00:04:26.713371       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0907 00:05:09.709203       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-816061-m02\" does not exist"
	I0907 00:05:09.722687       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-816061-m02" podCIDRs=["10.244.1.0/24"]
	I0907 00:05:09.736073       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gdck2"
	I0907 00:05:09.736120       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2wswp"
	I0907 00:05:11.721226       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-816061-m02"
	I0907 00:05:11.721383       1 event.go:307] "Event occurred" object="multinode-816061-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-816061-m02 event: Registered Node multinode-816061-m02 in Controller"
	I0907 00:05:20.196854       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-816061-m02"
	I0907 00:05:22.654880       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0907 00:05:22.680321       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-mq552"
	I0907 00:05:22.690370       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-zvzjl"
	I0907 00:05:22.713795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="58.717021ms"
	I0907 00:05:22.739763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="25.890961ms"
	I0907 00:05:22.764053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="24.214848ms"
	I0907 00:05:22.764195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.953µs"
	I0907 00:05:26.758080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="27.786991ms"
	I0907 00:05:26.758358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.063µs"
	I0907 00:05:27.340575       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.574946ms"
	I0907 00:05:27.340732       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="42.809µs"
	
	* 
	* ==> kube-proxy [cfc3eecef538134bcc139efebf2ad409e3292d5c002e39b17667f4e0cb52f64d] <==
	* I0907 00:04:19.662948       1 server_others.go:69] "Using iptables proxy"
	I0907 00:04:19.677442       1 node.go:141] Successfully retrieved node IP: 192.168.39.212
	I0907 00:04:19.721960       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0907 00:04:19.722032       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0907 00:04:19.724402       1 server_others.go:152] "Using iptables Proxier"
	I0907 00:04:19.724469       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0907 00:04:19.724722       1 server.go:846] "Version info" version="v1.28.1"
	I0907 00:04:19.724757       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:04:19.725805       1 config.go:188] "Starting service config controller"
	I0907 00:04:19.725850       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0907 00:04:19.725868       1 config.go:97] "Starting endpoint slice config controller"
	I0907 00:04:19.725872       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0907 00:04:19.726364       1 config.go:315] "Starting node config controller"
	I0907 00:04:19.726371       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0907 00:04:19.826700       1 shared_informer.go:318] Caches are synced for node config
	I0907 00:04:19.826785       1 shared_informer.go:318] Caches are synced for service config
	I0907 00:04:19.826807       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [37b13ebde6f5982ff5698b90420ddbc8680eb79a3d89c97508248e54a726d10f] <==
	* W0907 00:04:01.131490       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0907 00:04:01.131517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0907 00:04:01.131569       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0907 00:04:01.131676       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0907 00:04:01.131750       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0907 00:04:01.131786       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0907 00:04:01.131842       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0907 00:04:01.131869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0907 00:04:01.131936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0907 00:04:01.131965       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0907 00:04:01.132066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0907 00:04:01.132095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0907 00:04:01.132150       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0907 00:04:01.132178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0907 00:04:01.132236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0907 00:04:01.135580       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0907 00:04:01.964578       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0907 00:04:01.964836       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0907 00:04:02.042511       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0907 00:04:02.042679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0907 00:04:02.106338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0907 00:04:02.106433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0907 00:04:02.389992       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0907 00:04:02.390043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0907 00:04:05.007968       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-07 00:03:30 UTC, ends at Thu 2023-09-07 00:05:31 UTC. --
	Sep 07 00:04:17 multinode-816061 kubelet[1264]: I0907 00:04:17.876445    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/137c032b-12d1-4179-8416-0f3cc5733842-cni-cfg\") pod \"kindnet-xgbtc\" (UID: \"137c032b-12d1-4179-8416-0f3cc5733842\") " pod="kube-system/kindnet-xgbtc"
	Sep 07 00:04:17 multinode-816061 kubelet[1264]: I0907 00:04:17.876517    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b9717d8-174b-4713-a941-382c81cc659e-xtables-lock\") pod \"kube-proxy-tbzlv\" (UID: \"6b9717d8-174b-4713-a941-382c81cc659e\") " pod="kube-system/kube-proxy-tbzlv"
	Sep 07 00:04:17 multinode-816061 kubelet[1264]: I0907 00:04:17.876538    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b9717d8-174b-4713-a941-382c81cc659e-lib-modules\") pod \"kube-proxy-tbzlv\" (UID: \"6b9717d8-174b-4713-a941-382c81cc659e\") " pod="kube-system/kube-proxy-tbzlv"
	Sep 07 00:04:17 multinode-816061 kubelet[1264]: I0907 00:04:17.876568    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdrl9\" (UniqueName: \"kubernetes.io/projected/6b9717d8-174b-4713-a941-382c81cc659e-kube-api-access-sdrl9\") pod \"kube-proxy-tbzlv\" (UID: \"6b9717d8-174b-4713-a941-382c81cc659e\") " pod="kube-system/kube-proxy-tbzlv"
	Sep 07 00:04:17 multinode-816061 kubelet[1264]: I0907 00:04:17.876588    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/137c032b-12d1-4179-8416-0f3cc5733842-lib-modules\") pod \"kindnet-xgbtc\" (UID: \"137c032b-12d1-4179-8416-0f3cc5733842\") " pod="kube-system/kindnet-xgbtc"
	Sep 07 00:04:17 multinode-816061 kubelet[1264]: I0907 00:04:17.876683    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b9717d8-174b-4713-a941-382c81cc659e-kube-proxy\") pod \"kube-proxy-tbzlv\" (UID: \"6b9717d8-174b-4713-a941-382c81cc659e\") " pod="kube-system/kube-proxy-tbzlv"
	Sep 07 00:04:17 multinode-816061 kubelet[1264]: I0907 00:04:17.876707    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/137c032b-12d1-4179-8416-0f3cc5733842-xtables-lock\") pod \"kindnet-xgbtc\" (UID: \"137c032b-12d1-4179-8416-0f3cc5733842\") " pod="kube-system/kindnet-xgbtc"
	Sep 07 00:04:17 multinode-816061 kubelet[1264]: I0907 00:04:17.876725    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66bl5\" (UniqueName: \"kubernetes.io/projected/137c032b-12d1-4179-8416-0f3cc5733842-kube-api-access-66bl5\") pod \"kindnet-xgbtc\" (UID: \"137c032b-12d1-4179-8416-0f3cc5733842\") " pod="kube-system/kindnet-xgbtc"
	Sep 07 00:04:23 multinode-816061 kubelet[1264]: I0907 00:04:23.479947    1264 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tbzlv" podStartSLOduration=6.47989395 podCreationTimestamp="2023-09-07 00:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-07 00:04:20.464338894 +0000 UTC m=+16.347100983" watchObservedRunningTime="2023-09-07 00:04:23.47989395 +0000 UTC m=+19.362656039"
	Sep 07 00:04:23 multinode-816061 kubelet[1264]: I0907 00:04:23.632183    1264 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 07 00:04:23 multinode-816061 kubelet[1264]: I0907 00:04:23.672731    1264 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-xgbtc" podStartSLOduration=6.672693039 podCreationTimestamp="2023-09-07 00:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-07 00:04:23.481113084 +0000 UTC m=+19.363875172" watchObservedRunningTime="2023-09-07 00:04:23.672693039 +0000 UTC m=+19.555455128"
	Sep 07 00:04:23 multinode-816061 kubelet[1264]: I0907 00:04:23.672907    1264 topology_manager.go:215] "Topology Admit Handler" podUID="3ce467f7-aaa1-4391-9bc9-39ef0521ebd2" podNamespace="kube-system" podName="storage-provisioner"
	Sep 07 00:04:23 multinode-816061 kubelet[1264]: I0907 00:04:23.678568    1264 topology_manager.go:215] "Topology Admit Handler" podUID="c2574ba0-f19a-40c1-a06f-601bb17661f6" podNamespace="kube-system" podName="coredns-5dd5756b68-8ktxh"
	Sep 07 00:04:23 multinode-816061 kubelet[1264]: I0907 00:04:23.716661    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3ce467f7-aaa1-4391-9bc9-39ef0521ebd2-tmp\") pod \"storage-provisioner\" (UID: \"3ce467f7-aaa1-4391-9bc9-39ef0521ebd2\") " pod="kube-system/storage-provisioner"
	Sep 07 00:04:23 multinode-816061 kubelet[1264]: I0907 00:04:23.716743    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2574ba0-f19a-40c1-a06f-601bb17661f6-config-volume\") pod \"coredns-5dd5756b68-8ktxh\" (UID: \"c2574ba0-f19a-40c1-a06f-601bb17661f6\") " pod="kube-system/coredns-5dd5756b68-8ktxh"
	Sep 07 00:04:23 multinode-816061 kubelet[1264]: I0907 00:04:23.716776    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4c2c\" (UniqueName: \"kubernetes.io/projected/3ce467f7-aaa1-4391-9bc9-39ef0521ebd2-kube-api-access-b4c2c\") pod \"storage-provisioner\" (UID: \"3ce467f7-aaa1-4391-9bc9-39ef0521ebd2\") " pod="kube-system/storage-provisioner"
	Sep 07 00:04:23 multinode-816061 kubelet[1264]: I0907 00:04:23.716794    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfc9k\" (UniqueName: \"kubernetes.io/projected/c2574ba0-f19a-40c1-a06f-601bb17661f6-kube-api-access-rfc9k\") pod \"coredns-5dd5756b68-8ktxh\" (UID: \"c2574ba0-f19a-40c1-a06f-601bb17661f6\") " pod="kube-system/coredns-5dd5756b68-8ktxh"
	Sep 07 00:04:25 multinode-816061 kubelet[1264]: I0907 00:04:25.507077    1264 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8ktxh" podStartSLOduration=8.507042261 podCreationTimestamp="2023-09-07 00:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-07 00:04:25.50683397 +0000 UTC m=+21.389596059" watchObservedRunningTime="2023-09-07 00:04:25.507042261 +0000 UTC m=+21.389804345"
	Sep 07 00:04:25 multinode-816061 kubelet[1264]: I0907 00:04:25.507381    1264 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.507360543 podCreationTimestamp="2023-09-07 00:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-07 00:04:25.489991125 +0000 UTC m=+21.372753215" watchObservedRunningTime="2023-09-07 00:04:25.507360543 +0000 UTC m=+21.390122632"
	Sep 07 00:05:04 multinode-816061 kubelet[1264]: E0907 00:05:04.415589    1264 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 00:05:04 multinode-816061 kubelet[1264]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 00:05:04 multinode-816061 kubelet[1264]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 00:05:04 multinode-816061 kubelet[1264]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 00:05:22 multinode-816061 kubelet[1264]: I0907 00:05:22.720007    1264 topology_manager.go:215] "Topology Admit Handler" podUID="346dd02e-d6b2-481f-837e-45b618a3fd04" podNamespace="default" podName="busybox-5bc68d56bd-zvzjl"
	Sep 07 00:05:22 multinode-816061 kubelet[1264]: I0907 00:05:22.791299    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f45l\" (UniqueName: \"kubernetes.io/projected/346dd02e-d6b2-481f-837e-45b618a3fd04-kube-api-access-2f45l\") pod \"busybox-5bc68d56bd-zvzjl\" (UID: \"346dd02e-d6b2-481f-837e-45b618a3fd04\") " pod="default/busybox-5bc68d56bd-zvzjl"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-816061 -n multinode-816061
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-816061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.14s)
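The ping-host failure can be cross-checked by hand against the two busybox pods listed in the post-mortem above (busybox-5bc68d56bd-zvzjl on the control plane, busybox-5bc68d56bd-mq552 on m02). The commands below are an illustrative manual sketch, not the test's exact invocation; host.minikube.internal is taken from the coredns queries above, and 192.168.39.1 is assumed to be the KVM host address implied by the 1.39.168.192.in-addr.arpa PTR lookups.

	# resolve the host record from each pod (assumed manual check, not the test's own command)
	kubectl --context multinode-816061 exec busybox-5bc68d56bd-zvzjl -- nslookup host.minikube.internal
	kubectl --context multinode-816061 exec busybox-5bc68d56bd-mq552 -- nslookup host.minikube.internal
	# then try to reach the assumed host address directly from both pods
	kubectl --context multinode-816061 exec busybox-5bc68d56bd-zvzjl -- ping -c 1 192.168.39.1
	kubectl --context multinode-816061 exec busybox-5bc68d56bd-mq552 -- ping -c 1 192.168.39.1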

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (691.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-816061
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-816061
E0907 00:07:40.637914   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:09:02.117328   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-816061: exit status 82 (2m1.08453155s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-816061"  ...
	* Stopping node "multinode-816061"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-816061" : exit status 82
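The stop failure above (exit status 82, reported as GUEST_STOP_TIMEOUT) means the VM was still in the "Running" state after roughly two minutes of stop attempts. A sketch of how one might inspect and force-stop the underlying libvirt domain in this situation, assuming the kvm2 driver names the domain after the profile (the later "domain multinode-816061" log lines suggest it does, and the profile config shows KVMQemuURI:qemu:///system); the virsh destroy step is a hard power-off and is an assumption, not part of the test flow:

	virsh --connect qemu:///system list --all                            # check the multinode-816061 domain state
	virsh --connect qemu:///system destroy multinode-816061              # force power-off after the graceful stop timed out
	out/minikube-linux-amd64 -p multinode-816061 logs --file=logs.txt    # collect logs, as the error box suggests

The /tmp/minikube_stop_*.log file referenced in the error box should also record what the stop command attempted before giving up.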
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-816061 --wait=true -v=8 --alsologtostderr
E0907 00:11:17.593294   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:11:24.847353   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0907 00:12:47.893878   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0907 00:14:02.117476   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0907 00:15:25.162471   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0907 00:16:17.592733   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:16:24.847023   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-816061 --wait=true -v=8 --alsologtostderr: (9m26.990607838s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-816061
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-816061 -n multinode-816061
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-816061 logs -n 25: (1.575669649s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-816061 ssh -n                                                                 | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-816061 cp multinode-816061-m02:/home/docker/cp-test.txt                       | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3647011183/001/cp-test_multinode-816061-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-816061 ssh -n                                                                 | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-816061 cp multinode-816061-m02:/home/docker/cp-test.txt                       | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061:/home/docker/cp-test_multinode-816061-m02_multinode-816061.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-816061 ssh -n                                                                 | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-816061 ssh -n multinode-816061 sudo cat                                       | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | /home/docker/cp-test_multinode-816061-m02_multinode-816061.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-816061 cp multinode-816061-m02:/home/docker/cp-test.txt                       | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061-m03:/home/docker/cp-test_multinode-816061-m02_multinode-816061-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-816061 ssh -n                                                                 | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-816061 ssh -n multinode-816061-m03 sudo cat                                   | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | /home/docker/cp-test_multinode-816061-m02_multinode-816061-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-816061 cp testdata/cp-test.txt                                                | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-816061 ssh -n                                                                 | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-816061 cp multinode-816061-m03:/home/docker/cp-test.txt                       | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3647011183/001/cp-test_multinode-816061-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-816061 ssh -n                                                                 | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-816061 cp multinode-816061-m03:/home/docker/cp-test.txt                       | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061:/home/docker/cp-test_multinode-816061-m03_multinode-816061.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-816061 ssh -n                                                                 | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-816061 ssh -n multinode-816061 sudo cat                                       | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | /home/docker/cp-test_multinode-816061-m03_multinode-816061.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-816061 cp multinode-816061-m03:/home/docker/cp-test.txt                       | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061-m02:/home/docker/cp-test_multinode-816061-m03_multinode-816061-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-816061 ssh -n                                                                 | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | multinode-816061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-816061 ssh -n multinode-816061-m02 sudo cat                                   | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	|         | /home/docker/cp-test_multinode-816061-m03_multinode-816061-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-816061 node stop m03                                                          | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:06 UTC |
	| node    | multinode-816061 node start                                                             | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:06 UTC | 07 Sep 23 00:07 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-816061                                                                | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:07 UTC |                     |
	| stop    | -p multinode-816061                                                                     | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:07 UTC |                     |
	| start   | -p multinode-816061                                                                     | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:09 UTC | 07 Sep 23 00:18 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-816061                                                                | multinode-816061 | jenkins | v1.31.2 | 07 Sep 23 00:18 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/07 00:09:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:09:02.215795   29917 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:09:02.215908   29917 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:09:02.215915   29917 out.go:309] Setting ErrFile to fd 2...
	I0907 00:09:02.215920   29917 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:09:02.216116   29917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:09:02.216667   29917 out.go:303] Setting JSON to false
	I0907 00:09:02.217519   29917 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3087,"bootTime":1694042256,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:09:02.217582   29917 start.go:138] virtualization: kvm guest
	I0907 00:09:02.220021   29917 out.go:177] * [multinode-816061] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:09:02.221697   29917 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:09:02.221737   29917 notify.go:220] Checking for updates...
	I0907 00:09:02.223094   29917 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:09:02.224411   29917 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:09:02.225729   29917 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:09:02.227039   29917 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:09:02.229018   29917 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:09:02.230905   29917 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:09:02.230981   29917 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:09:02.231361   29917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:09:02.231394   29917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:09:02.245607   29917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I0907 00:09:02.245943   29917 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:09:02.246440   29917 main.go:141] libmachine: Using API Version  1
	I0907 00:09:02.246458   29917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:09:02.246815   29917 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:09:02.246976   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:09:02.282199   29917 out.go:177] * Using the kvm2 driver based on existing profile
	I0907 00:09:02.283697   29917 start.go:298] selected driver: kvm2
	I0907 00:09:02.283713   29917 start.go:902] validating driver "kvm2" against &{Name:multinode-816061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.153 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:09:02.283841   29917 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:09:02.284143   29917 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:09:02.284218   29917 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:09:02.299474   29917 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 00:09:02.300108   29917 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:09:02.300149   29917 cni.go:84] Creating CNI manager for ""
	I0907 00:09:02.300155   29917 cni.go:136] 3 nodes found, recommending kindnet
	I0907 00:09:02.300166   29917 start_flags.go:321] config:
	{Name:multinode-816061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.153 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:09:02.300424   29917 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:09:02.303135   29917 out.go:177] * Starting control plane node multinode-816061 in cluster multinode-816061
	I0907 00:09:02.304614   29917 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:09:02.304654   29917 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0907 00:09:02.304667   29917 cache.go:57] Caching tarball of preloaded images
	I0907 00:09:02.304738   29917 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:09:02.304749   29917 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 00:09:02.304895   29917 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/config.json ...
	I0907 00:09:02.305097   29917 start.go:365] acquiring machines lock for multinode-816061: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:09:02.305154   29917 start.go:369] acquired machines lock for "multinode-816061" in 38.56µs
	I0907 00:09:02.305173   29917 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:09:02.305187   29917 fix.go:54] fixHost starting: 
	I0907 00:09:02.305469   29917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:09:02.305510   29917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:09:02.319128   29917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39645
	I0907 00:09:02.319514   29917 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:09:02.320020   29917 main.go:141] libmachine: Using API Version  1
	I0907 00:09:02.320045   29917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:09:02.320348   29917 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:09:02.320515   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:09:02.320645   29917 main.go:141] libmachine: (multinode-816061) Calling .GetState
	I0907 00:09:02.322068   29917 fix.go:102] recreateIfNeeded on multinode-816061: state=Running err=<nil>
	W0907 00:09:02.322083   29917 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:09:02.324083   29917 out.go:177] * Updating the running kvm2 "multinode-816061" VM ...
	I0907 00:09:02.325360   29917 machine.go:88] provisioning docker machine ...
	I0907 00:09:02.325374   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:09:02.325567   29917 main.go:141] libmachine: (multinode-816061) Calling .GetMachineName
	I0907 00:09:02.325743   29917 buildroot.go:166] provisioning hostname "multinode-816061"
	I0907 00:09:02.325759   29917 main.go:141] libmachine: (multinode-816061) Calling .GetMachineName
	I0907 00:09:02.325875   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:09:02.327961   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:09:02.328315   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:09:02.328348   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:09:02.328436   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:09:02.328609   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:09:02.328762   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:09:02.328884   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:09:02.329049   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:09:02.329468   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0907 00:09:02.329484   29917 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-816061 && echo "multinode-816061" | sudo tee /etc/hostname
	I0907 00:09:20.711040   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:09:26.791112   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:09:29.863041   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:09:35.943073   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:09:39.015031   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:09:45.095067   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:09:48.167079   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:09:54.247106   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:09:57.319055   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:03.399069   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:06.471073   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:12.551042   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:15.623031   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:21.703106   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:24.775059   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:30.855112   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:33.927047   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:40.007062   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:43.079062   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:49.159113   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:52.230995   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:10:58.311090   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:01.383045   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:07.463047   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:10.535074   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:16.615060   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:19.687017   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:25.767044   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:28.839008   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:34.919007   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:37.991064   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:44.071080   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:47.143025   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:53.223052   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:11:56.295062   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:02.375035   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:05.447050   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:11.527083   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:14.599039   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:20.679039   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:23.751049   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:29.831122   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:32.903077   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:38.983062   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:42.055074   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:48.135078   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:51.207044   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:12:57.287010   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:00.363112   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:06.439051   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:09.511152   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:15.591045   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:18.663066   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:24.743070   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:27.815042   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:33.895050   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:36.967056   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:43.047056   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:46.119058   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:52.199016   29917 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I0907 00:13:55.201307   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:13:55.201364   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:13:55.203459   29917 machine.go:91] provisioned docker machine in 4m52.87808308s
	I0907 00:13:55.203494   29917 fix.go:56] fixHost completed within 4m52.89831307s
	I0907 00:13:55.203499   29917 start.go:83] releasing machines lock for "multinode-816061", held for 4m52.898333774s
	W0907 00:13:55.203515   29917 start.go:672] error starting host: provision: host is not running
	W0907 00:13:55.203588   29917 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0907 00:13:55.203597   29917 start.go:687] Will try again in 5 seconds ...
	I0907 00:14:00.206522   29917 start.go:365] acquiring machines lock for multinode-816061: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:14:00.206623   29917 start.go:369] acquired machines lock for "multinode-816061" in 59.991µs
	I0907 00:14:00.206651   29917 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:14:00.206667   29917 fix.go:54] fixHost starting: 
	I0907 00:14:00.206996   29917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:14:00.207021   29917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:14:00.221328   29917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41553
	I0907 00:14:00.221785   29917 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:14:00.222304   29917 main.go:141] libmachine: Using API Version  1
	I0907 00:14:00.222324   29917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:14:00.222651   29917 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:14:00.222828   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:14:00.222969   29917 main.go:141] libmachine: (multinode-816061) Calling .GetState
	I0907 00:14:00.224608   29917 fix.go:102] recreateIfNeeded on multinode-816061: state=Stopped err=<nil>
	I0907 00:14:00.224622   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	W0907 00:14:00.224792   29917 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:14:00.226841   29917 out.go:177] * Restarting existing kvm2 VM for "multinode-816061" ...
	I0907 00:14:00.228525   29917 main.go:141] libmachine: (multinode-816061) Calling .Start
	I0907 00:14:00.228706   29917 main.go:141] libmachine: (multinode-816061) Ensuring networks are active...
	I0907 00:14:00.229449   29917 main.go:141] libmachine: (multinode-816061) Ensuring network default is active
	I0907 00:14:00.229789   29917 main.go:141] libmachine: (multinode-816061) Ensuring network mk-multinode-816061 is active
	I0907 00:14:00.230173   29917 main.go:141] libmachine: (multinode-816061) Getting domain xml...
	I0907 00:14:00.230910   29917 main.go:141] libmachine: (multinode-816061) Creating domain...
	I0907 00:14:01.463750   29917 main.go:141] libmachine: (multinode-816061) Waiting to get IP...
	I0907 00:14:01.464643   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:01.465135   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:01.465224   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:01.465136   30700 retry.go:31] will retry after 293.000791ms: waiting for machine to come up
	I0907 00:14:01.759731   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:01.760214   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:01.760243   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:01.760167   30700 retry.go:31] will retry after 343.658382ms: waiting for machine to come up
	I0907 00:14:02.105771   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:02.106186   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:02.106218   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:02.106172   30700 retry.go:31] will retry after 430.962597ms: waiting for machine to come up
	I0907 00:14:02.538831   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:02.539240   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:02.539266   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:02.539192   30700 retry.go:31] will retry after 581.300417ms: waiting for machine to come up
	I0907 00:14:03.121844   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:03.122361   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:03.122391   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:03.122309   30700 retry.go:31] will retry after 561.068784ms: waiting for machine to come up
	I0907 00:14:03.684957   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:03.685392   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:03.685422   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:03.685375   30700 retry.go:31] will retry after 933.245291ms: waiting for machine to come up
	I0907 00:14:04.619732   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:04.620056   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:04.620097   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:04.620009   30700 retry.go:31] will retry after 1.129974875s: waiting for machine to come up
	I0907 00:14:05.751438   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:05.751939   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:05.751971   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:05.751878   30700 retry.go:31] will retry after 948.160284ms: waiting for machine to come up
	I0907 00:14:06.702076   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:06.702519   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:06.702550   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:06.702469   30700 retry.go:31] will retry after 1.362396454s: waiting for machine to come up
	I0907 00:14:08.067003   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:08.067453   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:08.067471   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:08.067409   30700 retry.go:31] will retry after 2.327625629s: waiting for machine to come up
	I0907 00:14:10.396269   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:10.396692   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:10.396722   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:10.396636   30700 retry.go:31] will retry after 2.806588081s: waiting for machine to come up
	I0907 00:14:13.205043   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:13.205419   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:13.205470   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:13.205360   30700 retry.go:31] will retry after 2.732499911s: waiting for machine to come up
	I0907 00:14:15.939510   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:15.940112   29917 main.go:141] libmachine: (multinode-816061) DBG | unable to find current IP address of domain multinode-816061 in network mk-multinode-816061
	I0907 00:14:15.940144   29917 main.go:141] libmachine: (multinode-816061) DBG | I0907 00:14:15.940055   30700 retry.go:31] will retry after 4.268665567s: waiting for machine to come up
	I0907 00:14:20.211823   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.212320   29917 main.go:141] libmachine: (multinode-816061) Found IP for machine: 192.168.39.212
	I0907 00:14:20.212345   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has current primary IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.212352   29917 main.go:141] libmachine: (multinode-816061) Reserving static IP address...
	I0907 00:14:20.212804   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "multinode-816061", mac: "52:54:00:ef:52:c5", ip: "192.168.39.212"} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:20.212829   29917 main.go:141] libmachine: (multinode-816061) DBG | skip adding static IP to network mk-multinode-816061 - found existing host DHCP lease matching {name: "multinode-816061", mac: "52:54:00:ef:52:c5", ip: "192.168.39.212"}
	I0907 00:14:20.212850   29917 main.go:141] libmachine: (multinode-816061) Reserved static IP address: 192.168.39.212
	I0907 00:14:20.212865   29917 main.go:141] libmachine: (multinode-816061) Waiting for SSH to be available...
	I0907 00:14:20.212877   29917 main.go:141] libmachine: (multinode-816061) DBG | Getting to WaitForSSH function...
	I0907 00:14:20.215149   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.215484   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:20.215518   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.215624   29917 main.go:141] libmachine: (multinode-816061) DBG | Using SSH client type: external
	I0907 00:14:20.215646   29917 main.go:141] libmachine: (multinode-816061) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa (-rw-------)
	I0907 00:14:20.215692   29917 main.go:141] libmachine: (multinode-816061) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:14:20.215715   29917 main.go:141] libmachine: (multinode-816061) DBG | About to run SSH command:
	I0907 00:14:20.215729   29917 main.go:141] libmachine: (multinode-816061) DBG | exit 0
	I0907 00:14:20.306451   29917 main.go:141] libmachine: (multinode-816061) DBG | SSH cmd err, output: <nil>: 
	I0907 00:14:20.306849   29917 main.go:141] libmachine: (multinode-816061) Calling .GetConfigRaw
	I0907 00:14:20.307446   29917 main.go:141] libmachine: (multinode-816061) Calling .GetIP
	I0907 00:14:20.310167   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.310551   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:20.310585   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.310910   29917 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/config.json ...
	I0907 00:14:20.311100   29917 machine.go:88] provisioning docker machine ...
	I0907 00:14:20.311122   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:14:20.311328   29917 main.go:141] libmachine: (multinode-816061) Calling .GetMachineName
	I0907 00:14:20.311466   29917 buildroot.go:166] provisioning hostname "multinode-816061"
	I0907 00:14:20.311499   29917 main.go:141] libmachine: (multinode-816061) Calling .GetMachineName
	I0907 00:14:20.311635   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:14:20.313822   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.314340   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:20.314364   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.314526   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:14:20.314692   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:14:20.314841   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:14:20.315017   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:14:20.315171   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:14:20.315549   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0907 00:14:20.315562   29917 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-816061 && echo "multinode-816061" | sudo tee /etc/hostname
	I0907 00:14:20.452905   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-816061
	
	I0907 00:14:20.452931   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:14:20.455604   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.455937   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:20.455967   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.456163   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:14:20.456335   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:14:20.456451   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:14:20.456585   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:14:20.456770   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:14:20.457255   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0907 00:14:20.457278   29917 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-816061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-816061/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-816061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:14:20.587949   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:14:20.587972   29917 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:14:20.587998   29917 buildroot.go:174] setting up certificates
	I0907 00:14:20.588007   29917 provision.go:83] configureAuth start
	I0907 00:14:20.588016   29917 main.go:141] libmachine: (multinode-816061) Calling .GetMachineName
	I0907 00:14:20.588289   29917 main.go:141] libmachine: (multinode-816061) Calling .GetIP
	I0907 00:14:20.590646   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.591054   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:20.591085   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.591146   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:14:20.593395   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.593701   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:20.593721   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.593845   29917 provision.go:138] copyHostCerts
	I0907 00:14:20.593880   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:14:20.593906   29917 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:14:20.593915   29917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:14:20.593977   29917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:14:20.594095   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:14:20.594117   29917 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:14:20.594121   29917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:14:20.594145   29917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:14:20.594188   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:14:20.594203   29917 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:14:20.594209   29917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:14:20.594227   29917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:14:20.594268   29917 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.multinode-816061 san=[192.168.39.212 192.168.39.212 localhost 127.0.0.1 minikube multinode-816061]
	I0907 00:14:20.865686   29917 provision.go:172] copyRemoteCerts
	I0907 00:14:20.865738   29917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:14:20.865760   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:14:20.868382   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.868722   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:20.868755   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:20.868921   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:14:20.869121   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:14:20.869264   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:14:20.869369   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:14:20.960505   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0907 00:14:20.960565   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0907 00:14:20.988554   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0907 00:14:20.988645   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:14:21.014674   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0907 00:14:21.014738   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:14:21.040408   29917 provision.go:86] duration metric: configureAuth took 452.389561ms
	I0907 00:14:21.040440   29917 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:14:21.040695   29917 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:14:21.040808   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:14:21.043350   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:21.043761   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:21.043783   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:21.043943   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:14:21.044150   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:14:21.044320   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:14:21.044414   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:14:21.044568   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:14:21.044994   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0907 00:14:21.045010   29917 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:14:21.363454   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:14:21.363479   29917 machine.go:91] provisioned docker machine in 1.052363606s
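The printf verbs in the SSH command above appear as %!s(MISSING) because their arguments were dropped from the log call; judging from the command text and the echoed output, the step writes a CRI-O sysconfig drop-in and restarts the runtime, roughly:

	# reconstruction sketch of the provisioning step logged above (path, variable and flag all taken from the log)
	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio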
	I0907 00:14:21.363490   29917 start.go:300] post-start starting for "multinode-816061" (driver="kvm2")
	I0907 00:14:21.363502   29917 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:14:21.363522   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:14:21.363892   29917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:14:21.363920   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:14:21.366528   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:21.366959   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:21.366998   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:21.367158   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:14:21.367354   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:14:21.367574   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:14:21.367731   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:14:21.462504   29917 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:14:21.466495   29917 command_runner.go:130] > NAME=Buildroot
	I0907 00:14:21.466511   29917 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0907 00:14:21.466516   29917 command_runner.go:130] > ID=buildroot
	I0907 00:14:21.466521   29917 command_runner.go:130] > VERSION_ID=2021.02.12
	I0907 00:14:21.466525   29917 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0907 00:14:21.466554   29917 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:14:21.466562   29917 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:14:21.466622   29917 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:14:21.466699   29917 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:14:21.466707   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> /etc/ssl/certs/136572.pem
	I0907 00:14:21.466814   29917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:14:21.476899   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:14:21.500036   29917 start.go:303] post-start completed in 136.534127ms
	I0907 00:14:21.500058   29917 fix.go:56] fixHost completed within 21.29339094s
	I0907 00:14:21.500082   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:14:21.502541   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:21.502910   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:21.502948   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:21.503159   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:14:21.503370   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:14:21.503537   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:14:21.503682   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:14:21.503853   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:14:21.504289   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0907 00:14:21.504302   29917 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:14:21.623980   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694045661.571217449
	
	I0907 00:14:21.624000   29917 fix.go:206] guest clock: 1694045661.571217449
	I0907 00:14:21.624010   29917 fix.go:219] Guest: 2023-09-07 00:14:21.571217449 +0000 UTC Remote: 2023-09-07 00:14:21.500061567 +0000 UTC m=+319.317443060 (delta=71.155882ms)
	I0907 00:14:21.624071   29917 fix.go:190] guest clock delta is within tolerance: 71.155882ms
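The date format verbs were likewise swallowed by the logger; the returned value 1694045661.571217449 indicates the guest was asked for epoch seconds with nanosecond precision, i.e. something like:

	# reconstructed clock probe; the host compares this reading against its own time to get the ~71ms delta above
	date +%s.%N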
	I0907 00:14:21.624080   29917 start.go:83] releasing machines lock for "multinode-816061", held for 21.417442158s
	I0907 00:14:21.624108   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:14:21.624350   29917 main.go:141] libmachine: (multinode-816061) Calling .GetIP
	I0907 00:14:21.626652   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:21.627018   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:21.627070   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:21.627242   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:14:21.627720   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:14:21.627925   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:14:21.628037   29917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:14:21.628091   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:14:21.628162   29917 ssh_runner.go:195] Run: cat /version.json
	I0907 00:14:21.628201   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:14:21.630849   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:21.631125   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:21.631235   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:21.631265   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:21.631374   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:14:21.631518   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:21.631542   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:14:21.631546   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:21.631707   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:14:21.631714   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:14:21.631887   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:14:21.631880   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:14:21.632034   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:14:21.632170   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:14:21.737338   29917 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0907 00:14:21.737419   29917 command_runner.go:130] > {"iso_version": "v1.31.0-1692872107-17120", "kicbase_version": "v0.0.40-1692613578-17086", "minikube_version": "v1.31.2", "commit": "9dc31f0284dc1a8a35859648c60120733f0f8296"}
	I0907 00:14:21.737529   29917 ssh_runner.go:195] Run: systemctl --version
	I0907 00:14:21.743112   29917 command_runner.go:130] > systemd 247 (247)
	I0907 00:14:21.743195   29917 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0907 00:14:21.743316   29917 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:14:21.901773   29917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0907 00:14:21.907638   29917 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0907 00:14:21.907788   29917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:14:21.907862   29917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:14:21.923884   29917 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0907 00:14:21.923913   29917 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
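The find command's -printf argument was also lost in the log (it prints each matched path); the step renames any bridge or podman CNI configs so minikube's own CNI setup takes precedence, roughly:

	# sketch of the CNI-disable step, reconstructed from the command and its logged output
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;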
	I0907 00:14:21.923927   29917 start.go:466] detecting cgroup driver to use...
	I0907 00:14:21.923986   29917 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:14:21.937676   29917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:14:21.950120   29917 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:14:21.950178   29917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:14:21.962817   29917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:14:21.975947   29917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:14:21.989758   29917 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0907 00:14:22.082141   29917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:14:22.095819   29917 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0907 00:14:22.198484   29917 docker.go:212] disabling docker service ...
	I0907 00:14:22.198567   29917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:14:22.211707   29917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:14:22.223161   29917 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0907 00:14:22.223440   29917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:14:22.236871   29917 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0907 00:14:22.331997   29917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:14:22.344983   29917 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0907 00:14:22.345480   29917 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0907 00:14:22.439522   29917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:14:22.451706   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:14:22.468831   29917 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0907 00:14:22.469296   29917 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:14:22.469360   29917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:14:22.478760   29917 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:14:22.478841   29917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:14:22.488446   29917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:14:22.498021   29917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:14:22.507769   29917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
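Taken together, the runtime-configuration steps above (the crictl endpoint write, whose printf argument was dropped from the log, plus the sed edits) amount to something like:

	# sketch of the CRI-O runtime configuration applied above, consolidated from the logged commands
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk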
	I0907 00:14:22.517683   29917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:14:22.525845   29917 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:14:22.525913   29917 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:14:22.525979   29917 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:14:22.539427   29917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:14:22.549339   29917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:14:22.656319   29917 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:14:22.818960   29917 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:14:22.819050   29917 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:14:22.824730   29917 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0907 00:14:22.824753   29917 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0907 00:14:22.824759   29917 command_runner.go:130] > Device: 16h/22d	Inode: 785         Links: 1
	I0907 00:14:22.824766   29917 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0907 00:14:22.824770   29917 command_runner.go:130] > Access: 2023-09-07 00:14:22.752697280 +0000
	I0907 00:14:22.824777   29917 command_runner.go:130] > Modify: 2023-09-07 00:14:22.752697280 +0000
	I0907 00:14:22.824781   29917 command_runner.go:130] > Change: 2023-09-07 00:14:22.753697280 +0000
	I0907 00:14:22.824785   29917 command_runner.go:130] >  Birth: -
	I0907 00:14:22.824799   29917 start.go:534] Will wait 60s for crictl version
	I0907 00:14:22.824835   29917 ssh_runner.go:195] Run: which crictl
	I0907 00:14:22.828548   29917 command_runner.go:130] > /usr/bin/crictl
	I0907 00:14:22.828630   29917 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:14:22.859793   29917 command_runner.go:130] > Version:  0.1.0
	I0907 00:14:22.859810   29917 command_runner.go:130] > RuntimeName:  cri-o
	I0907 00:14:22.859814   29917 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0907 00:14:22.859819   29917 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0907 00:14:22.860072   29917 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:14:22.860158   29917 ssh_runner.go:195] Run: crio --version
	I0907 00:14:22.913487   29917 command_runner.go:130] > crio version 1.24.1
	I0907 00:14:22.913516   29917 command_runner.go:130] > Version:          1.24.1
	I0907 00:14:22.913527   29917 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0907 00:14:22.913533   29917 command_runner.go:130] > GitTreeState:     dirty
	I0907 00:14:22.913541   29917 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0907 00:14:22.913548   29917 command_runner.go:130] > GoVersion:        go1.19.9
	I0907 00:14:22.913554   29917 command_runner.go:130] > Compiler:         gc
	I0907 00:14:22.913560   29917 command_runner.go:130] > Platform:         linux/amd64
	I0907 00:14:22.913566   29917 command_runner.go:130] > Linkmode:         dynamic
	I0907 00:14:22.913577   29917 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0907 00:14:22.913583   29917 command_runner.go:130] > SeccompEnabled:   true
	I0907 00:14:22.913590   29917 command_runner.go:130] > AppArmorEnabled:  false
	I0907 00:14:22.915200   29917 ssh_runner.go:195] Run: crio --version
	I0907 00:14:22.970588   29917 command_runner.go:130] > crio version 1.24.1
	I0907 00:14:22.970623   29917 command_runner.go:130] > Version:          1.24.1
	I0907 00:14:22.970635   29917 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0907 00:14:22.970641   29917 command_runner.go:130] > GitTreeState:     dirty
	I0907 00:14:22.970650   29917 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0907 00:14:22.970657   29917 command_runner.go:130] > GoVersion:        go1.19.9
	I0907 00:14:22.970663   29917 command_runner.go:130] > Compiler:         gc
	I0907 00:14:22.970671   29917 command_runner.go:130] > Platform:         linux/amd64
	I0907 00:14:22.970683   29917 command_runner.go:130] > Linkmode:         dynamic
	I0907 00:14:22.970696   29917 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0907 00:14:22.970703   29917 command_runner.go:130] > SeccompEnabled:   true
	I0907 00:14:22.970711   29917 command_runner.go:130] > AppArmorEnabled:  false
	I0907 00:14:22.972726   29917 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:14:22.974182   29917 main.go:141] libmachine: (multinode-816061) Calling .GetIP
	I0907 00:14:22.976777   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:22.977130   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:14:22.977166   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:14:22.977354   29917 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0907 00:14:22.981432   29917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:14:22.993970   29917 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:14:22.994032   29917 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:14:23.022851   29917 command_runner.go:130] > {
	I0907 00:14:23.022874   29917 command_runner.go:130] >   "images": [
	I0907 00:14:23.022879   29917 command_runner.go:130] >     {
	I0907 00:14:23.022891   29917 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0907 00:14:23.022899   29917 command_runner.go:130] >       "repoTags": [
	I0907 00:14:23.022906   29917 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0907 00:14:23.022911   29917 command_runner.go:130] >       ],
	I0907 00:14:23.022928   29917 command_runner.go:130] >       "repoDigests": [
	I0907 00:14:23.022941   29917 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0907 00:14:23.022956   29917 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0907 00:14:23.022963   29917 command_runner.go:130] >       ],
	I0907 00:14:23.022974   29917 command_runner.go:130] >       "size": "750414",
	I0907 00:14:23.022981   29917 command_runner.go:130] >       "uid": {
	I0907 00:14:23.022992   29917 command_runner.go:130] >         "value": "65535"
	I0907 00:14:23.022999   29917 command_runner.go:130] >       },
	I0907 00:14:23.023008   29917 command_runner.go:130] >       "username": "",
	I0907 00:14:23.023018   29917 command_runner.go:130] >       "spec": null
	I0907 00:14:23.023025   29917 command_runner.go:130] >     }
	I0907 00:14:23.023032   29917 command_runner.go:130] >   ]
	I0907 00:14:23.023040   29917 command_runner.go:130] > }
	I0907 00:14:23.023229   29917 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:14:23.023307   29917 ssh_runner.go:195] Run: which lz4
	I0907 00:14:23.027002   29917 command_runner.go:130] > /usr/bin/lz4
	I0907 00:14:23.027028   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0907 00:14:23.027115   29917 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0907 00:14:23.030960   29917 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:14:23.031179   29917 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:14:23.031201   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0907 00:14:24.818972   29917 crio.go:444] Took 1.791887 seconds to copy over tarball
	I0907 00:14:24.819042   29917 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:14:27.552354   29917 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.733277582s)
	I0907 00:14:27.552394   29917 crio.go:451] Took 2.733394 seconds to extract the tarball
	I0907 00:14:27.552405   29917 ssh_runner.go:146] rm: /preloaded.tar.lz4
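Because only the pause image was present after the restart, the preloaded image tarball is copied to the node over SSH and unpacked into CRI-O's storage; the equivalent manual steps on the guest are roughly:

	# sketch of the preload restore logged above; the ~457 MB tarball has already been copied to /preloaded.tar.lz4
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4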
	I0907 00:14:27.592885   29917 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:14:27.631826   29917 command_runner.go:130] > {
	I0907 00:14:27.631847   29917 command_runner.go:130] >   "images": [
	I0907 00:14:27.631854   29917 command_runner.go:130] >     {
	I0907 00:14:27.631866   29917 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0907 00:14:27.631872   29917 command_runner.go:130] >       "repoTags": [
	I0907 00:14:27.631882   29917 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0907 00:14:27.631887   29917 command_runner.go:130] >       ],
	I0907 00:14:27.631893   29917 command_runner.go:130] >       "repoDigests": [
	I0907 00:14:27.631921   29917 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0907 00:14:27.631936   29917 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0907 00:14:27.631943   29917 command_runner.go:130] >       ],
	I0907 00:14:27.631953   29917 command_runner.go:130] >       "size": "65249302",
	I0907 00:14:27.631962   29917 command_runner.go:130] >       "uid": null,
	I0907 00:14:27.631968   29917 command_runner.go:130] >       "username": "",
	I0907 00:14:27.631980   29917 command_runner.go:130] >       "spec": null
	I0907 00:14:27.631990   29917 command_runner.go:130] >     },
	I0907 00:14:27.631996   29917 command_runner.go:130] >     {
	I0907 00:14:27.632007   29917 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0907 00:14:27.632018   29917 command_runner.go:130] >       "repoTags": [
	I0907 00:14:27.632027   29917 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0907 00:14:27.632035   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632043   29917 command_runner.go:130] >       "repoDigests": [
	I0907 00:14:27.632057   29917 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0907 00:14:27.632067   29917 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0907 00:14:27.632073   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632077   29917 command_runner.go:130] >       "size": "31470524",
	I0907 00:14:27.632084   29917 command_runner.go:130] >       "uid": null,
	I0907 00:14:27.632095   29917 command_runner.go:130] >       "username": "",
	I0907 00:14:27.632103   29917 command_runner.go:130] >       "spec": null
	I0907 00:14:27.632106   29917 command_runner.go:130] >     },
	I0907 00:14:27.632112   29917 command_runner.go:130] >     {
	I0907 00:14:27.632118   29917 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0907 00:14:27.632124   29917 command_runner.go:130] >       "repoTags": [
	I0907 00:14:27.632131   29917 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0907 00:14:27.632138   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632142   29917 command_runner.go:130] >       "repoDigests": [
	I0907 00:14:27.632151   29917 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0907 00:14:27.632160   29917 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0907 00:14:27.632166   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632170   29917 command_runner.go:130] >       "size": "53621675",
	I0907 00:14:27.632176   29917 command_runner.go:130] >       "uid": null,
	I0907 00:14:27.632180   29917 command_runner.go:130] >       "username": "",
	I0907 00:14:27.632186   29917 command_runner.go:130] >       "spec": null
	I0907 00:14:27.632190   29917 command_runner.go:130] >     },
	I0907 00:14:27.632199   29917 command_runner.go:130] >     {
	I0907 00:14:27.632207   29917 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0907 00:14:27.632213   29917 command_runner.go:130] >       "repoTags": [
	I0907 00:14:27.632218   29917 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0907 00:14:27.632224   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632228   29917 command_runner.go:130] >       "repoDigests": [
	I0907 00:14:27.632237   29917 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0907 00:14:27.632248   29917 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0907 00:14:27.632254   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632258   29917 command_runner.go:130] >       "size": "295456551",
	I0907 00:14:27.632262   29917 command_runner.go:130] >       "uid": {
	I0907 00:14:27.632266   29917 command_runner.go:130] >         "value": "0"
	I0907 00:14:27.632275   29917 command_runner.go:130] >       },
	I0907 00:14:27.632282   29917 command_runner.go:130] >       "username": "",
	I0907 00:14:27.632287   29917 command_runner.go:130] >       "spec": null
	I0907 00:14:27.632293   29917 command_runner.go:130] >     },
	I0907 00:14:27.632296   29917 command_runner.go:130] >     {
	I0907 00:14:27.632304   29917 command_runner.go:130] >       "id": "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77",
	I0907 00:14:27.632310   29917 command_runner.go:130] >       "repoTags": [
	I0907 00:14:27.632315   29917 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0907 00:14:27.632321   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632325   29917 command_runner.go:130] >       "repoDigests": [
	I0907 00:14:27.632335   29917 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774",
	I0907 00:14:27.632344   29917 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0907 00:14:27.632347   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632356   29917 command_runner.go:130] >       "size": "126972880",
	I0907 00:14:27.632360   29917 command_runner.go:130] >       "uid": {
	I0907 00:14:27.632364   29917 command_runner.go:130] >         "value": "0"
	I0907 00:14:27.632370   29917 command_runner.go:130] >       },
	I0907 00:14:27.632374   29917 command_runner.go:130] >       "username": "",
	I0907 00:14:27.632380   29917 command_runner.go:130] >       "spec": null
	I0907 00:14:27.632384   29917 command_runner.go:130] >     },
	I0907 00:14:27.632389   29917 command_runner.go:130] >     {
	I0907 00:14:27.632399   29917 command_runner.go:130] >       "id": "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac",
	I0907 00:14:27.632407   29917 command_runner.go:130] >       "repoTags": [
	I0907 00:14:27.632419   29917 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0907 00:14:27.632428   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632443   29917 command_runner.go:130] >       "repoDigests": [
	I0907 00:14:27.632459   29917 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830",
	I0907 00:14:27.632474   29917 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0907 00:14:27.632483   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632493   29917 command_runner.go:130] >       "size": "123163446",
	I0907 00:14:27.632501   29917 command_runner.go:130] >       "uid": {
	I0907 00:14:27.632515   29917 command_runner.go:130] >         "value": "0"
	I0907 00:14:27.632523   29917 command_runner.go:130] >       },
	I0907 00:14:27.632530   29917 command_runner.go:130] >       "username": "",
	I0907 00:14:27.632538   29917 command_runner.go:130] >       "spec": null
	I0907 00:14:27.632545   29917 command_runner.go:130] >     },
	I0907 00:14:27.632549   29917 command_runner.go:130] >     {
	I0907 00:14:27.632561   29917 command_runner.go:130] >       "id": "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5",
	I0907 00:14:27.632567   29917 command_runner.go:130] >       "repoTags": [
	I0907 00:14:27.632572   29917 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0907 00:14:27.632578   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632582   29917 command_runner.go:130] >       "repoDigests": [
	I0907 00:14:27.632594   29917 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3",
	I0907 00:14:27.632603   29917 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"
	I0907 00:14:27.632609   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632613   29917 command_runner.go:130] >       "size": "74680215",
	I0907 00:14:27.632617   29917 command_runner.go:130] >       "uid": null,
	I0907 00:14:27.632621   29917 command_runner.go:130] >       "username": "",
	I0907 00:14:27.632625   29917 command_runner.go:130] >       "spec": null
	I0907 00:14:27.632631   29917 command_runner.go:130] >     },
	I0907 00:14:27.632637   29917 command_runner.go:130] >     {
	I0907 00:14:27.632643   29917 command_runner.go:130] >       "id": "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a",
	I0907 00:14:27.632649   29917 command_runner.go:130] >       "repoTags": [
	I0907 00:14:27.632654   29917 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0907 00:14:27.632661   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632665   29917 command_runner.go:130] >       "repoDigests": [
	I0907 00:14:27.632674   29917 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4",
	I0907 00:14:27.632732   29917 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"
	I0907 00:14:27.632749   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632757   29917 command_runner.go:130] >       "size": "61477686",
	I0907 00:14:27.632765   29917 command_runner.go:130] >       "uid": {
	I0907 00:14:27.632775   29917 command_runner.go:130] >         "value": "0"
	I0907 00:14:27.632783   29917 command_runner.go:130] >       },
	I0907 00:14:27.632791   29917 command_runner.go:130] >       "username": "",
	I0907 00:14:27.632800   29917 command_runner.go:130] >       "spec": null
	I0907 00:14:27.632807   29917 command_runner.go:130] >     },
	I0907 00:14:27.632815   29917 command_runner.go:130] >     {
	I0907 00:14:27.632829   29917 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0907 00:14:27.632839   29917 command_runner.go:130] >       "repoTags": [
	I0907 00:14:27.632849   29917 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0907 00:14:27.632855   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632864   29917 command_runner.go:130] >       "repoDigests": [
	I0907 00:14:27.632876   29917 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0907 00:14:27.632889   29917 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0907 00:14:27.632895   29917 command_runner.go:130] >       ],
	I0907 00:14:27.632902   29917 command_runner.go:130] >       "size": "750414",
	I0907 00:14:27.632909   29917 command_runner.go:130] >       "uid": {
	I0907 00:14:27.632918   29917 command_runner.go:130] >         "value": "65535"
	I0907 00:14:27.632924   29917 command_runner.go:130] >       },
	I0907 00:14:27.632935   29917 command_runner.go:130] >       "username": "",
	I0907 00:14:27.632943   29917 command_runner.go:130] >       "spec": null
	I0907 00:14:27.632950   29917 command_runner.go:130] >     }
	I0907 00:14:27.632954   29917 command_runner.go:130] >   ]
	I0907 00:14:27.632959   29917 command_runner.go:130] > }
	I0907 00:14:27.633355   29917 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:14:27.633374   29917 cache_images.go:84] Images are preloaded, skipping loading
	I0907 00:14:27.633438   29917 ssh_runner.go:195] Run: crio config
	I0907 00:14:27.684552   29917 command_runner.go:130] ! time="2023-09-07 00:14:27.631435118Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0907 00:14:27.684584   29917 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0907 00:14:27.694470   29917 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0907 00:14:27.694496   29917 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0907 00:14:27.694509   29917 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0907 00:14:27.694512   29917 command_runner.go:130] > #
	I0907 00:14:27.694520   29917 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0907 00:14:27.694526   29917 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0907 00:14:27.694532   29917 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0907 00:14:27.694549   29917 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0907 00:14:27.694555   29917 command_runner.go:130] > # reload'.
	I0907 00:14:27.694568   29917 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0907 00:14:27.694587   29917 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0907 00:14:27.694598   29917 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0907 00:14:27.694611   29917 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0907 00:14:27.694619   29917 command_runner.go:130] > [crio]
	I0907 00:14:27.694629   29917 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0907 00:14:27.694640   29917 command_runner.go:130] > # containers images, in this directory.
	I0907 00:14:27.694648   29917 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0907 00:14:27.694661   29917 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0907 00:14:27.694678   29917 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0907 00:14:27.694688   29917 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0907 00:14:27.694702   29917 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0907 00:14:27.694713   29917 command_runner.go:130] > storage_driver = "overlay"
	I0907 00:14:27.694727   29917 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0907 00:14:27.694738   29917 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0907 00:14:27.694741   29917 command_runner.go:130] > storage_option = [
	I0907 00:14:27.694746   29917 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0907 00:14:27.694749   29917 command_runner.go:130] > ]
	I0907 00:14:27.694758   29917 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0907 00:14:27.694764   29917 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0907 00:14:27.694768   29917 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0907 00:14:27.694789   29917 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0907 00:14:27.694803   29917 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0907 00:14:27.694814   29917 command_runner.go:130] > # always happen on a node reboot
	I0907 00:14:27.694826   29917 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0907 00:14:27.694837   29917 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0907 00:14:27.694850   29917 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0907 00:14:27.694864   29917 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0907 00:14:27.694871   29917 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0907 00:14:27.694880   29917 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0907 00:14:27.694890   29917 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0907 00:14:27.694895   29917 command_runner.go:130] > # internal_wipe = true
	I0907 00:14:27.694900   29917 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0907 00:14:27.694913   29917 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0907 00:14:27.694925   29917 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0907 00:14:27.694937   29917 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0907 00:14:27.694950   29917 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0907 00:14:27.694957   29917 command_runner.go:130] > [crio.api]
	I0907 00:14:27.694968   29917 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0907 00:14:27.694979   29917 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0907 00:14:27.694990   29917 command_runner.go:130] > # IP address on which the stream server will listen.
	I0907 00:14:27.694998   29917 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0907 00:14:27.695004   29917 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0907 00:14:27.695011   29917 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0907 00:14:27.695015   29917 command_runner.go:130] > # stream_port = "0"
	I0907 00:14:27.695022   29917 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0907 00:14:27.695026   29917 command_runner.go:130] > # stream_enable_tls = false
	I0907 00:14:27.695036   29917 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0907 00:14:27.695043   29917 command_runner.go:130] > # stream_idle_timeout = ""
	I0907 00:14:27.695049   29917 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0907 00:14:27.695054   29917 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0907 00:14:27.695060   29917 command_runner.go:130] > # minutes.
	I0907 00:14:27.695064   29917 command_runner.go:130] > # stream_tls_cert = ""
	I0907 00:14:27.695072   29917 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0907 00:14:27.695078   29917 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0907 00:14:27.695084   29917 command_runner.go:130] > # stream_tls_key = ""
	I0907 00:14:27.695090   29917 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0907 00:14:27.695095   29917 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0907 00:14:27.695102   29917 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0907 00:14:27.695106   29917 command_runner.go:130] > # stream_tls_ca = ""
	I0907 00:14:27.695115   29917 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0907 00:14:27.695122   29917 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0907 00:14:27.695133   29917 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0907 00:14:27.695140   29917 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0907 00:14:27.695161   29917 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0907 00:14:27.695172   29917 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0907 00:14:27.695178   29917 command_runner.go:130] > [crio.runtime]
	I0907 00:14:27.695184   29917 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0907 00:14:27.695191   29917 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0907 00:14:27.695197   29917 command_runner.go:130] > # "nofile=1024:2048"
	I0907 00:14:27.695203   29917 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0907 00:14:27.695209   29917 command_runner.go:130] > # default_ulimits = [
	I0907 00:14:27.695213   29917 command_runner.go:130] > # ]
	I0907 00:14:27.695221   29917 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0907 00:14:27.695227   29917 command_runner.go:130] > # no_pivot = false
	I0907 00:14:27.695233   29917 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0907 00:14:27.695243   29917 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0907 00:14:27.695250   29917 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0907 00:14:27.695256   29917 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0907 00:14:27.695263   29917 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0907 00:14:27.695269   29917 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0907 00:14:27.695276   29917 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0907 00:14:27.695281   29917 command_runner.go:130] > # Cgroup setting for conmon
	I0907 00:14:27.695293   29917 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0907 00:14:27.695299   29917 command_runner.go:130] > conmon_cgroup = "pod"
	I0907 00:14:27.695305   29917 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0907 00:14:27.695312   29917 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0907 00:14:27.695318   29917 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0907 00:14:27.695325   29917 command_runner.go:130] > conmon_env = [
	I0907 00:14:27.695330   29917 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0907 00:14:27.695336   29917 command_runner.go:130] > ]
	I0907 00:14:27.695341   29917 command_runner.go:130] > # Additional environment variables to set for all the
	I0907 00:14:27.695349   29917 command_runner.go:130] > # containers. These are overridden if set in the
	I0907 00:14:27.695356   29917 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0907 00:14:27.695361   29917 command_runner.go:130] > # default_env = [
	I0907 00:14:27.695365   29917 command_runner.go:130] > # ]
	I0907 00:14:27.695373   29917 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0907 00:14:27.695377   29917 command_runner.go:130] > # selinux = false
	I0907 00:14:27.695384   29917 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0907 00:14:27.695392   29917 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0907 00:14:27.695399   29917 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0907 00:14:27.695406   29917 command_runner.go:130] > # seccomp_profile = ""
	I0907 00:14:27.695414   29917 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0907 00:14:27.695419   29917 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0907 00:14:27.695428   29917 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0907 00:14:27.695432   29917 command_runner.go:130] > # which might increase security.
	I0907 00:14:27.695439   29917 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0907 00:14:27.695446   29917 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0907 00:14:27.695454   29917 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0907 00:14:27.695460   29917 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0907 00:14:27.695466   29917 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0907 00:14:27.695473   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:14:27.695477   29917 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0907 00:14:27.695485   29917 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0907 00:14:27.695489   29917 command_runner.go:130] > # the cgroup blockio controller.
	I0907 00:14:27.695493   29917 command_runner.go:130] > # blockio_config_file = ""
	I0907 00:14:27.695504   29917 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0907 00:14:27.695510   29917 command_runner.go:130] > # irqbalance daemon.
	I0907 00:14:27.695515   29917 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0907 00:14:27.695523   29917 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0907 00:14:27.695530   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:14:27.695534   29917 command_runner.go:130] > # rdt_config_file = ""
	I0907 00:14:27.695542   29917 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0907 00:14:27.695546   29917 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0907 00:14:27.695554   29917 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0907 00:14:27.695560   29917 command_runner.go:130] > # separate_pull_cgroup = ""
	I0907 00:14:27.695567   29917 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0907 00:14:27.695575   29917 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0907 00:14:27.695579   29917 command_runner.go:130] > # will be added.
	I0907 00:14:27.695583   29917 command_runner.go:130] > # default_capabilities = [
	I0907 00:14:27.695589   29917 command_runner.go:130] > # 	"CHOWN",
	I0907 00:14:27.695593   29917 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0907 00:14:27.695599   29917 command_runner.go:130] > # 	"FSETID",
	I0907 00:14:27.695603   29917 command_runner.go:130] > # 	"FOWNER",
	I0907 00:14:27.695609   29917 command_runner.go:130] > # 	"SETGID",
	I0907 00:14:27.695612   29917 command_runner.go:130] > # 	"SETUID",
	I0907 00:14:27.695620   29917 command_runner.go:130] > # 	"SETPCAP",
	I0907 00:14:27.695627   29917 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0907 00:14:27.695633   29917 command_runner.go:130] > # 	"KILL",
	I0907 00:14:27.695636   29917 command_runner.go:130] > # ]
	I0907 00:14:27.695644   29917 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0907 00:14:27.695652   29917 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0907 00:14:27.695657   29917 command_runner.go:130] > # default_sysctls = [
	I0907 00:14:27.695660   29917 command_runner.go:130] > # ]
	I0907 00:14:27.695665   29917 command_runner.go:130] > # List of devices on the host that a
	I0907 00:14:27.695671   29917 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0907 00:14:27.695680   29917 command_runner.go:130] > # allowed_devices = [
	I0907 00:14:27.695686   29917 command_runner.go:130] > # 	"/dev/fuse",
	I0907 00:14:27.695690   29917 command_runner.go:130] > # ]
	I0907 00:14:27.695697   29917 command_runner.go:130] > # List of additional devices. specified as
	I0907 00:14:27.695706   29917 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0907 00:14:27.695713   29917 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0907 00:14:27.695739   29917 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0907 00:14:27.695749   29917 command_runner.go:130] > # additional_devices = [
	I0907 00:14:27.695753   29917 command_runner.go:130] > # ]
	I0907 00:14:27.695760   29917 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0907 00:14:27.695764   29917 command_runner.go:130] > # cdi_spec_dirs = [
	I0907 00:14:27.695770   29917 command_runner.go:130] > # 	"/etc/cdi",
	I0907 00:14:27.695773   29917 command_runner.go:130] > # 	"/var/run/cdi",
	I0907 00:14:27.695779   29917 command_runner.go:130] > # ]
	I0907 00:14:27.695788   29917 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0907 00:14:27.695796   29917 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0907 00:14:27.695802   29917 command_runner.go:130] > # Defaults to false.
	I0907 00:14:27.695807   29917 command_runner.go:130] > # device_ownership_from_security_context = false
	I0907 00:14:27.695815   29917 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0907 00:14:27.695823   29917 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0907 00:14:27.695829   29917 command_runner.go:130] > # hooks_dir = [
	I0907 00:14:27.695834   29917 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0907 00:14:27.695839   29917 command_runner.go:130] > # ]
	I0907 00:14:27.695845   29917 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0907 00:14:27.695853   29917 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0907 00:14:27.695858   29917 command_runner.go:130] > # its default mounts from the following two files:
	I0907 00:14:27.695864   29917 command_runner.go:130] > #
	I0907 00:14:27.695872   29917 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0907 00:14:27.695881   29917 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0907 00:14:27.695889   29917 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0907 00:14:27.695897   29917 command_runner.go:130] > #
	I0907 00:14:27.695902   29917 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0907 00:14:27.695911   29917 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0907 00:14:27.695919   29917 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0907 00:14:27.695926   29917 command_runner.go:130] > #      only add mounts it finds in this file.
	I0907 00:14:27.695931   29917 command_runner.go:130] > #
	I0907 00:14:27.695935   29917 command_runner.go:130] > # default_mounts_file = ""
	I0907 00:14:27.695942   29917 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0907 00:14:27.695949   29917 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0907 00:14:27.695955   29917 command_runner.go:130] > pids_limit = 1024
	I0907 00:14:27.695961   29917 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0907 00:14:27.695969   29917 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0907 00:14:27.695977   29917 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0907 00:14:27.695984   29917 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0907 00:14:27.695990   29917 command_runner.go:130] > # log_size_max = -1
	I0907 00:14:27.695999   29917 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0907 00:14:27.696005   29917 command_runner.go:130] > # log_to_journald = false
	I0907 00:14:27.696011   29917 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0907 00:14:27.696018   29917 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0907 00:14:27.696023   29917 command_runner.go:130] > # Path to directory for container attach sockets.
	I0907 00:14:27.696030   29917 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0907 00:14:27.696035   29917 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0907 00:14:27.696039   29917 command_runner.go:130] > # bind_mount_prefix = ""
	I0907 00:14:27.696045   29917 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0907 00:14:27.696051   29917 command_runner.go:130] > # read_only = false
	I0907 00:14:27.696057   29917 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0907 00:14:27.696065   29917 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0907 00:14:27.696072   29917 command_runner.go:130] > # live configuration reload.
	I0907 00:14:27.696076   29917 command_runner.go:130] > # log_level = "info"
	I0907 00:14:27.696083   29917 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0907 00:14:27.696089   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:14:27.696092   29917 command_runner.go:130] > # log_filter = ""
	I0907 00:14:27.696098   29917 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0907 00:14:27.696108   29917 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0907 00:14:27.696114   29917 command_runner.go:130] > # separated by comma.
	I0907 00:14:27.696118   29917 command_runner.go:130] > # uid_mappings = ""
	I0907 00:14:27.696126   29917 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0907 00:14:27.696141   29917 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0907 00:14:27.696151   29917 command_runner.go:130] > # separated by comma.
	I0907 00:14:27.696157   29917 command_runner.go:130] > # gid_mappings = ""
	I0907 00:14:27.696163   29917 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0907 00:14:27.696171   29917 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0907 00:14:27.696177   29917 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0907 00:14:27.696184   29917 command_runner.go:130] > # minimum_mappable_uid = -1
	I0907 00:14:27.696190   29917 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0907 00:14:27.696198   29917 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0907 00:14:27.696204   29917 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0907 00:14:27.696210   29917 command_runner.go:130] > # minimum_mappable_gid = -1
	I0907 00:14:27.696216   29917 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0907 00:14:27.696224   29917 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0907 00:14:27.696232   29917 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0907 00:14:27.696237   29917 command_runner.go:130] > # ctr_stop_timeout = 30
	I0907 00:14:27.696246   29917 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0907 00:14:27.696254   29917 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0907 00:14:27.696261   29917 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0907 00:14:27.696265   29917 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0907 00:14:27.696272   29917 command_runner.go:130] > drop_infra_ctr = false
	I0907 00:14:27.696278   29917 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0907 00:14:27.696286   29917 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0907 00:14:27.696293   29917 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0907 00:14:27.696299   29917 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0907 00:14:27.696305   29917 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0907 00:14:27.696311   29917 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0907 00:14:27.696316   29917 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0907 00:14:27.696324   29917 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0907 00:14:27.696329   29917 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0907 00:14:27.696337   29917 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0907 00:14:27.696344   29917 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0907 00:14:27.696352   29917 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0907 00:14:27.696361   29917 command_runner.go:130] > # default_runtime = "runc"
	I0907 00:14:27.696366   29917 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0907 00:14:27.696375   29917 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0907 00:14:27.696388   29917 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0907 00:14:27.696397   29917 command_runner.go:130] > # creation as a file is not desired either.
	I0907 00:14:27.696407   29917 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0907 00:14:27.696415   29917 command_runner.go:130] > # the hostname is being managed dynamically.
	I0907 00:14:27.696419   29917 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0907 00:14:27.696425   29917 command_runner.go:130] > # ]
	I0907 00:14:27.696431   29917 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0907 00:14:27.696440   29917 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0907 00:14:27.696447   29917 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0907 00:14:27.696456   29917 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0907 00:14:27.696464   29917 command_runner.go:130] > #
	I0907 00:14:27.696470   29917 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0907 00:14:27.696477   29917 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0907 00:14:27.696487   29917 command_runner.go:130] > #  runtime_type = "oci"
	I0907 00:14:27.696494   29917 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0907 00:14:27.696509   29917 command_runner.go:130] > #  privileged_without_host_devices = false
	I0907 00:14:27.696519   29917 command_runner.go:130] > #  allowed_annotations = []
	I0907 00:14:27.696527   29917 command_runner.go:130] > # Where:
	I0907 00:14:27.696537   29917 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0907 00:14:27.696550   29917 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0907 00:14:27.696563   29917 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0907 00:14:27.696575   29917 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0907 00:14:27.696585   29917 command_runner.go:130] > #   in $PATH.
	I0907 00:14:27.696597   29917 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0907 00:14:27.696608   29917 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0907 00:14:27.696621   29917 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0907 00:14:27.696634   29917 command_runner.go:130] > #   state.
	I0907 00:14:27.696645   29917 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0907 00:14:27.696658   29917 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0907 00:14:27.696671   29917 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0907 00:14:27.696683   29917 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0907 00:14:27.696696   29917 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0907 00:14:27.696706   29917 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0907 00:14:27.696714   29917 command_runner.go:130] > #   The currently recognized values are:
	I0907 00:14:27.696722   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0907 00:14:27.696731   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0907 00:14:27.696739   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0907 00:14:27.696749   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0907 00:14:27.696758   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0907 00:14:27.696767   29917 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0907 00:14:27.696778   29917 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0907 00:14:27.696786   29917 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0907 00:14:27.696793   29917 command_runner.go:130] > #   should be moved to the container's cgroup
	I0907 00:14:27.696798   29917 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0907 00:14:27.696804   29917 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0907 00:14:27.696808   29917 command_runner.go:130] > runtime_type = "oci"
	I0907 00:14:27.696814   29917 command_runner.go:130] > runtime_root = "/run/runc"
	I0907 00:14:27.696819   29917 command_runner.go:130] > runtime_config_path = ""
	I0907 00:14:27.696825   29917 command_runner.go:130] > monitor_path = ""
	I0907 00:14:27.696829   29917 command_runner.go:130] > monitor_cgroup = ""
	I0907 00:14:27.696836   29917 command_runner.go:130] > monitor_exec_cgroup = ""
	I0907 00:14:27.696844   29917 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0907 00:14:27.696850   29917 command_runner.go:130] > # running containers
	I0907 00:14:27.696854   29917 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0907 00:14:27.696862   29917 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0907 00:14:27.696927   29917 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0907 00:14:27.696938   29917 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0907 00:14:27.696943   29917 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0907 00:14:27.696948   29917 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0907 00:14:27.696952   29917 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0907 00:14:27.696958   29917 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0907 00:14:27.696963   29917 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0907 00:14:27.696970   29917 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0907 00:14:27.696976   29917 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0907 00:14:27.696983   29917 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0907 00:14:27.696992   29917 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0907 00:14:27.696999   29917 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0907 00:14:27.697009   29917 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0907 00:14:27.697016   29917 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0907 00:14:27.697030   29917 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0907 00:14:27.697040   29917 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0907 00:14:27.697045   29917 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0907 00:14:27.697054   29917 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0907 00:14:27.697063   29917 command_runner.go:130] > # Example:
	I0907 00:14:27.697071   29917 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0907 00:14:27.697075   29917 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0907 00:14:27.697083   29917 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0907 00:14:27.697087   29917 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0907 00:14:27.697091   29917 command_runner.go:130] > # cpuset = 0
	I0907 00:14:27.697096   29917 command_runner.go:130] > # cpushares = "0-1"
	I0907 00:14:27.697099   29917 command_runner.go:130] > # Where:
	I0907 00:14:27.697104   29917 command_runner.go:130] > # The workload name is workload-type.
	I0907 00:14:27.697113   29917 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0907 00:14:27.697120   29917 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0907 00:14:27.697126   29917 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0907 00:14:27.697139   29917 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0907 00:14:27.697146   29917 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0907 00:14:27.697153   29917 command_runner.go:130] > # 
	I0907 00:14:27.697161   29917 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0907 00:14:27.697166   29917 command_runner.go:130] > #
	I0907 00:14:27.697172   29917 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0907 00:14:27.697180   29917 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0907 00:14:27.697190   29917 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0907 00:14:27.697198   29917 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0907 00:14:27.697205   29917 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0907 00:14:27.697209   29917 command_runner.go:130] > [crio.image]
	I0907 00:14:27.697215   29917 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0907 00:14:27.697221   29917 command_runner.go:130] > # default_transport = "docker://"
	I0907 00:14:27.697227   29917 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0907 00:14:27.697236   29917 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0907 00:14:27.697242   29917 command_runner.go:130] > # global_auth_file = ""
	I0907 00:14:27.697247   29917 command_runner.go:130] > # The image used to instantiate infra containers.
	I0907 00:14:27.697255   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:14:27.697262   29917 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0907 00:14:27.697268   29917 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0907 00:14:27.697278   29917 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0907 00:14:27.697285   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:14:27.697290   29917 command_runner.go:130] > # pause_image_auth_file = ""
	I0907 00:14:27.697297   29917 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0907 00:14:27.697303   29917 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0907 00:14:27.697313   29917 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0907 00:14:27.697321   29917 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0907 00:14:27.697327   29917 command_runner.go:130] > # pause_command = "/pause"
	I0907 00:14:27.697333   29917 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0907 00:14:27.697341   29917 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0907 00:14:27.697349   29917 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0907 00:14:27.697357   29917 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0907 00:14:27.697364   29917 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0907 00:14:27.697369   29917 command_runner.go:130] > # signature_policy = ""
	I0907 00:14:27.697376   29917 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0907 00:14:27.697382   29917 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0907 00:14:27.697388   29917 command_runner.go:130] > # changing them here.
	I0907 00:14:27.697392   29917 command_runner.go:130] > # insecure_registries = [
	I0907 00:14:27.697398   29917 command_runner.go:130] > # ]
	I0907 00:14:27.697406   29917 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0907 00:14:27.697411   29917 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0907 00:14:27.697415   29917 command_runner.go:130] > # image_volumes = "mkdir"
	I0907 00:14:27.697419   29917 command_runner.go:130] > # Temporary directory to use for storing big files
	I0907 00:14:27.697423   29917 command_runner.go:130] > # big_files_temporary_dir = ""
	I0907 00:14:27.697429   29917 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0907 00:14:27.697432   29917 command_runner.go:130] > # CNI plugins.
	I0907 00:14:27.697436   29917 command_runner.go:130] > [crio.network]
	I0907 00:14:27.697441   29917 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0907 00:14:27.697446   29917 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0907 00:14:27.697450   29917 command_runner.go:130] > # cni_default_network = ""
	I0907 00:14:27.697455   29917 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0907 00:14:27.697461   29917 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0907 00:14:27.697466   29917 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0907 00:14:27.697470   29917 command_runner.go:130] > # plugin_dirs = [
	I0907 00:14:27.697473   29917 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0907 00:14:27.697476   29917 command_runner.go:130] > # ]
	I0907 00:14:27.697484   29917 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0907 00:14:27.697487   29917 command_runner.go:130] > [crio.metrics]
	I0907 00:14:27.697492   29917 command_runner.go:130] > # Globally enable or disable metrics support.
	I0907 00:14:27.697495   29917 command_runner.go:130] > enable_metrics = true
	I0907 00:14:27.697500   29917 command_runner.go:130] > # Specify enabled metrics collectors.
	I0907 00:14:27.697504   29917 command_runner.go:130] > # Per default all metrics are enabled.
	I0907 00:14:27.697509   29917 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0907 00:14:27.697515   29917 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0907 00:14:27.697520   29917 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0907 00:14:27.697526   29917 command_runner.go:130] > # metrics_collectors = [
	I0907 00:14:27.697529   29917 command_runner.go:130] > # 	"operations",
	I0907 00:14:27.697533   29917 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0907 00:14:27.697538   29917 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0907 00:14:27.697544   29917 command_runner.go:130] > # 	"operations_errors",
	I0907 00:14:27.697547   29917 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0907 00:14:27.697551   29917 command_runner.go:130] > # 	"image_pulls_by_name",
	I0907 00:14:27.697555   29917 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0907 00:14:27.697559   29917 command_runner.go:130] > # 	"image_pulls_failures",
	I0907 00:14:27.697565   29917 command_runner.go:130] > # 	"image_pulls_successes",
	I0907 00:14:27.697568   29917 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0907 00:14:27.697572   29917 command_runner.go:130] > # 	"image_layer_reuse",
	I0907 00:14:27.697576   29917 command_runner.go:130] > # 	"containers_oom_total",
	I0907 00:14:27.697579   29917 command_runner.go:130] > # 	"containers_oom",
	I0907 00:14:27.697583   29917 command_runner.go:130] > # 	"processes_defunct",
	I0907 00:14:27.697586   29917 command_runner.go:130] > # 	"operations_total",
	I0907 00:14:27.697591   29917 command_runner.go:130] > # 	"operations_latency_seconds",
	I0907 00:14:27.697600   29917 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0907 00:14:27.697607   29917 command_runner.go:130] > # 	"operations_errors_total",
	I0907 00:14:27.697611   29917 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0907 00:14:27.697617   29917 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0907 00:14:27.697622   29917 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0907 00:14:27.697629   29917 command_runner.go:130] > # 	"image_pulls_success_total",
	I0907 00:14:27.697633   29917 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0907 00:14:27.697640   29917 command_runner.go:130] > # 	"containers_oom_count_total",
	I0907 00:14:27.697643   29917 command_runner.go:130] > # ]
	I0907 00:14:27.697648   29917 command_runner.go:130] > # The port on which the metrics server will listen.
	I0907 00:14:27.697657   29917 command_runner.go:130] > # metrics_port = 9090
	I0907 00:14:27.697665   29917 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0907 00:14:27.697669   29917 command_runner.go:130] > # metrics_socket = ""
	I0907 00:14:27.697676   29917 command_runner.go:130] > # The certificate for the secure metrics server.
	I0907 00:14:27.697682   29917 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0907 00:14:27.697690   29917 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0907 00:14:27.697695   29917 command_runner.go:130] > # certificate on any modification event.
	I0907 00:14:27.697701   29917 command_runner.go:130] > # metrics_cert = ""
	I0907 00:14:27.697706   29917 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0907 00:14:27.697713   29917 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0907 00:14:27.697717   29917 command_runner.go:130] > # metrics_key = ""
	I0907 00:14:27.697724   29917 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0907 00:14:27.697730   29917 command_runner.go:130] > [crio.tracing]
	I0907 00:14:27.697736   29917 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0907 00:14:27.697740   29917 command_runner.go:130] > # enable_tracing = false
	I0907 00:14:27.697745   29917 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0907 00:14:27.697752   29917 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0907 00:14:27.697757   29917 command_runner.go:130] > # Number of samples to collect per million spans.
	I0907 00:14:27.697767   29917 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0907 00:14:27.697775   29917 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0907 00:14:27.697781   29917 command_runner.go:130] > [crio.stats]
	I0907 00:14:27.697786   29917 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0907 00:14:27.697794   29917 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0907 00:14:27.697798   29917 command_runner.go:130] > # stats_collection_period = 0
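The crio.conf dump ends here. The uncommented keys above (for example conmon, conmon_cgroup = "pod", cgroup_manager = "cgroupfs", pids_limit = 1024, pinns_path, pause_image and enable_metrics = true) are the ones set explicitly rather than left at their commented defaults. As an illustration only, a minimal Go sketch of reading a few of those keys back with a TOML decoder; it assumes the github.com/BurntSushi/toml package and the conventional /etc/crio/crio.conf path, and the struct below covers just these fields rather than the full schema.

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// crioConfig mirrors only the handful of keys inspected here; the real file
// contains many more tables and keys, as the dump above shows.
type crioConfig struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
			Conmon        string `toml:"conmon"`
			PidsLimit     int64  `toml:"pids_limit"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cgroup_manager=%q pids_limit=%d pause_image=%q\n",
		cfg.Crio.Runtime.CgroupManager,
		cfg.Crio.Runtime.PidsLimit,
		cfg.Crio.Image.PauseImage)
}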
	I0907 00:14:27.697868   29917 cni.go:84] Creating CNI manager for ""
	I0907 00:14:27.697879   29917 cni.go:136] 3 nodes found, recommending kindnet
	I0907 00:14:27.697895   29917 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:14:27.697913   29917 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-816061 NodeName:multinode-816061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:14:27.698026   29917 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-816061"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:14:27.698079   29917 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-816061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
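The kubeadm bundle generated above embeds a KubeletConfiguration (cgroupDriver: cgroupfs, evictionHard thresholds of "0%", staticPodPath: /etc/kubernetes/manifests). A minimal sketch of parsing such a document, assuming the sigs.k8s.io/yaml package; the kubeletConfig struct below is a hand-rolled subset for illustration, not the upstream kubelet API type.

package main

import (
	"fmt"
	"log"

	"sigs.k8s.io/yaml"
)

// kubeletConfig covers only the fields checked here.
type kubeletConfig struct {
	CgroupDriver  string            `json:"cgroupDriver"`
	FailSwapOn    bool              `json:"failSwapOn"`
	EvictionHard  map[string]string `json:"evictionHard"`
	StaticPodPath string            `json:"staticPodPath"`
}

func main() {
	doc := []byte(`
cgroupDriver: cgroupfs
failSwapOn: false
evictionHard:
  nodefs.available: "0%"
staticPodPath: /etc/kubernetes/manifests
`)
	var cfg kubeletConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.CgroupDriver, cfg.EvictionHard["nodefs.available"], cfg.StaticPodPath)
}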
	I0907 00:14:27.698123   29917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:14:27.707888   29917 command_runner.go:130] > kubeadm
	I0907 00:14:27.707910   29917 command_runner.go:130] > kubectl
	I0907 00:14:27.707914   29917 command_runner.go:130] > kubelet
	I0907 00:14:27.708062   29917 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:14:27.708125   29917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:14:27.717217   29917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0907 00:14:27.732706   29917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:14:27.748857   29917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0907 00:14:27.764963   29917 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0907 00:14:27.768722   29917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
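The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping. A rough Go equivalent of that filter-and-append, assuming it runs with permission to rewrite /etc/hosts and reusing the IP and hostname from the log above:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const (
		hostsPath = "/etc/hosts"
		entry     = "192.168.39.212\tcontrol-plane.minikube.internal"
	)
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing mapping for the control-plane name, keep everything else.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	// Trim trailing blank entries before appending so the file stays tidy.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")), 0644); err != nil {
		log.Fatal(err)
	}
}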
	I0907 00:14:27.780161   29917 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061 for IP: 192.168.39.212
	I0907 00:14:27.780198   29917 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:14:27.780359   29917 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:14:27.780395   29917 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:14:27.780485   29917 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key
	I0907 00:14:27.780569   29917 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.key.543da273
	I0907 00:14:27.780623   29917 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.key
	I0907 00:14:27.780635   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0907 00:14:27.780655   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0907 00:14:27.780665   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0907 00:14:27.780679   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0907 00:14:27.780690   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0907 00:14:27.780701   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0907 00:14:27.780712   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0907 00:14:27.780721   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0907 00:14:27.780774   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:14:27.780799   29917 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:14:27.780813   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:14:27.780838   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:14:27.780862   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:14:27.780886   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:14:27.780929   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:14:27.780955   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> /usr/share/ca-certificates/136572.pem
	I0907 00:14:27.780968   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:14:27.780980   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem -> /usr/share/ca-certificates/13657.pem
	I0907 00:14:27.781529   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:14:27.804276   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 00:14:27.827156   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:14:27.849222   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:14:27.871218   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:14:27.893393   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:14:27.915061   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:14:27.937439   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:14:27.960344   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:14:27.982293   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:14:28.005057   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:14:28.028533   29917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:14:28.047035   29917 ssh_runner.go:195] Run: openssl version
	I0907 00:14:28.052542   29917 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0907 00:14:28.053018   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:14:28.064689   29917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:14:28.069644   29917 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:14:28.069744   29917 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:14:28.069806   29917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:14:28.075331   29917 command_runner.go:130] > b5213941
	I0907 00:14:28.075543   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:14:28.086769   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:14:28.097926   29917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:14:28.102371   29917 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:14:28.102718   29917 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:14:28.102759   29917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:14:28.108057   29917 command_runner.go:130] > 51391683
	I0907 00:14:28.108332   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:14:28.119142   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:14:28.130178   29917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:14:28.134691   29917 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:14:28.134831   29917 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:14:28.134885   29917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:14:28.140381   29917 command_runner.go:130] > 3ec20f2e
	I0907 00:14:28.140469   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
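Each of the three blocks above hashes a CA certificate with openssl x509 -hash -noout and links it into /etc/ssl/certs as <hash>.0, the layout OpenSSL-based clients use to find trusted CAs by subject hash. A small sketch that re-checks those links; the hash filenames and expected targets are copied from the ln commands above, and recomputing the subject hash itself is not attempted here.

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Hash filenames and expected targets taken from the ln -fs commands above.
	links := map[string]string{
		"/etc/ssl/certs/b5213941.0": "/etc/ssl/certs/minikubeCA.pem",
		"/etc/ssl/certs/51391683.0": "/etc/ssl/certs/13657.pem",
		"/etc/ssl/certs/3ec20f2e.0": "/etc/ssl/certs/136572.pem",
	}
	for link, want := range links {
		got, err := os.Readlink(link)
		if err != nil {
			log.Fatal(err)
		}
		if got != want {
			log.Fatalf("%s points at %s, want %s", link, got, want)
		}
		fmt.Printf("%s -> %s\n", link, got)
	}
}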
	I0907 00:14:28.151327   29917 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:14:28.156139   29917 command_runner.go:130] > ca.crt
	I0907 00:14:28.156154   29917 command_runner.go:130] > ca.key
	I0907 00:14:28.156162   29917 command_runner.go:130] > healthcheck-client.crt
	I0907 00:14:28.156166   29917 command_runner.go:130] > healthcheck-client.key
	I0907 00:14:28.156170   29917 command_runner.go:130] > peer.crt
	I0907 00:14:28.156174   29917 command_runner.go:130] > peer.key
	I0907 00:14:28.156177   29917 command_runner.go:130] > server.crt
	I0907 00:14:28.156181   29917 command_runner.go:130] > server.key
	I0907 00:14:28.156220   29917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:14:28.161887   29917 command_runner.go:130] > Certificate will not expire
	I0907 00:14:28.162100   29917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:14:28.167782   29917 command_runner.go:130] > Certificate will not expire
	I0907 00:14:28.167846   29917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:14:28.173469   29917 command_runner.go:130] > Certificate will not expire
	I0907 00:14:28.173535   29917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:14:28.179396   29917 command_runner.go:130] > Certificate will not expire
	I0907 00:14:28.179452   29917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:14:28.185102   29917 command_runner.go:130] > Certificate will not expire
	I0907 00:14:28.185150   29917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:14:28.190577   29917 command_runner.go:130] > Certificate will not expire
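
Each `openssl x509 -checkend 86400` call above only asks whether the certificate expires within the next 24 hours. An equivalent check in Go, parsing the PEM directly instead of shelling out; the file path is copied from the log, everything else is an illustrative sketch.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires before now+d — the same question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }
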
	I0907 00:14:28.190906   29917 kubeadm.go:404] StartCluster: {Name:multinode-816061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.153 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:14:28.191058   29917 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:14:28.191097   29917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:14:28.232882   29917 cri.go:89] found id: ""
	I0907 00:14:28.232953   29917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:14:28.243467   29917 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0907 00:14:28.243492   29917 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0907 00:14:28.243501   29917 command_runner.go:130] > /var/lib/minikube/etcd:
	I0907 00:14:28.243506   29917 command_runner.go:130] > member
	I0907 00:14:28.243544   29917 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:14:28.243566   29917 kubeadm.go:636] restartCluster start
	I0907 00:14:28.243695   29917 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:14:28.255136   29917 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:28.255655   29917 kubeconfig.go:92] found "multinode-816061" server: "https://192.168.39.212:8443"
	I0907 00:14:28.256075   29917 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:14:28.256311   29917 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:14:28.256979   29917 cert_rotation.go:137] Starting client certificate rotation controller
	I0907 00:14:28.257115   29917 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:14:28.268344   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:28.268393   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:28.280848   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:28.280873   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:28.280913   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:28.291965   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:28.792770   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:28.792842   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:28.805262   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:29.292909   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:29.292976   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:29.306800   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:29.792314   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:29.792387   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:29.805634   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:30.292342   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:30.292437   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:30.305204   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:30.792848   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:30.792931   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:30.805097   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:31.292742   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:31.292811   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:31.305792   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:31.792327   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:31.792407   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:31.805061   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:32.292930   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:32.293009   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:32.305413   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:32.792378   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:32.792468   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:32.804913   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:33.292472   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:33.292555   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:33.304997   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:33.792504   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:33.792578   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:33.805008   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:34.292537   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:34.292629   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:34.304926   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:34.792456   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:34.792535   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:34.805172   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:35.292141   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:35.292221   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:35.304544   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:35.792124   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:35.792223   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:35.804754   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:36.292307   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:36.292381   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:36.304828   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:36.792381   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:36.792477   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:36.804453   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:37.292118   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:37.292206   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:37.304687   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:14:37.792767   29917 api_server.go:166] Checking apiserver status ...
	I0907 00:14:37.792841   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:14:37.805016   29917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
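
The roughly ten seconds of repeated pgrep failures above are the restart path probing for a kube-apiserver process until a deadline expires, which produces the "context deadline exceeded" conclusion on the next line. A stripped-down sketch of that poll-until-deadline pattern; the command string comes from the log, the helper and the 500ms interval are illustrative.

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForAPIServerPID polls pgrep until it returns a PID or the
    // context deadline is hit, mirroring the loop in the log above.
    func waitForAPIServerPID(ctx context.Context) (string, error) {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil
    		}
    		select {
    		case <-ctx.Done():
    			return "", ctx.Err() // e.g. "context deadline exceeded"
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	pid, err := waitForAPIServerPID(ctx)
    	fmt.Println(pid, err)
    }
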
	I0907 00:14:38.268779   29917 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:14:38.268824   29917 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:14:38.268857   29917 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:14:38.268914   29917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:14:38.300214   29917 cri.go:89] found id: ""
	I0907 00:14:38.300273   29917 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:14:38.316286   29917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:14:38.325683   29917 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0907 00:14:38.325709   29917 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0907 00:14:38.325719   29917 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0907 00:14:38.325731   29917 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:14:38.325870   29917 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:14:38.325938   29917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:14:38.335538   29917 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:14:38.335567   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:14:38.451131   29917 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:14:38.451156   29917 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0907 00:14:38.451166   29917 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0907 00:14:38.451175   29917 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0907 00:14:38.451184   29917 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0907 00:14:38.451194   29917 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0907 00:14:38.451206   29917 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0907 00:14:38.451217   29917 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0907 00:14:38.451243   29917 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0907 00:14:38.451253   29917 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0907 00:14:38.451260   29917 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0907 00:14:38.451266   29917 command_runner.go:130] > [certs] Using the existing "sa" key
	I0907 00:14:38.451281   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:14:38.507967   29917 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:14:38.630656   29917 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:14:38.867647   29917 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:14:39.065907   29917 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:14:39.250978   29917 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:14:39.253675   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:14:39.432489   29917 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:14:39.432516   29917 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:14:39.432521   29917 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0907 00:14:39.432547   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:14:39.520123   29917 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:14:39.520322   29917 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:14:39.523648   29917 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:14:39.524942   29917 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:14:39.527319   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:14:39.601627   29917 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
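
Because the stale-config check failed earlier, the reconfigure path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml rather than doing a full `kubeadm init`. A compact sketch of driving those phases from Go; the binary path, config path, and PATH override are copied from the logged commands, the loop itself is illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		// Same shape as the logged commands: prepend the versioned kubeadm
    		// binary dir to PATH and point every phase at the same config file.
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("phase %q: err=%v\n%s", p, err, out)
    	}
    }
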
	I0907 00:14:39.601671   29917 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:14:39.601742   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:14:39.613488   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:14:40.127999   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:14:40.627547   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:14:41.127823   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:14:41.627813   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:14:42.128224   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:14:42.151537   29917 command_runner.go:130] > 1098
	I0907 00:14:42.151965   29917 api_server.go:72] duration metric: took 2.550290209s to wait for apiserver process to appear ...
	I0907 00:14:42.151987   29917 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:14:42.152005   29917 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0907 00:14:46.158275   29917 api_server.go:279] https://192.168.39.212:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:14:46.158308   29917 api_server.go:103] status: https://192.168.39.212:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:14:46.158321   29917 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0907 00:14:46.207410   29917 api_server.go:279] https://192.168.39.212:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:14:46.207442   29917 api_server.go:103] status: https://192.168.39.212:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:14:46.707924   29917 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0907 00:14:46.713003   29917 api_server.go:279] https://192.168.39.212:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:14:46.713024   29917 api_server.go:103] status: https://192.168.39.212:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:14:47.207794   29917 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0907 00:14:47.215918   29917 api_server.go:279] https://192.168.39.212:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:14:47.217797   29917 api_server.go:103] status: https://192.168.39.212:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:14:47.708363   29917 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0907 00:14:47.718559   29917 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
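
The healthz probes above pass through three stages: 403 while only anonymous credentials are presented, 500 with per-component output while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200. A minimal sketch of such a poll; it skips TLS verification purely to stay short, which a real client against the cluster CA should not do.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // pollHealthz hits /healthz until it returns 200, printing each
    // intermediate status the way the log above records 403s and 500s.
    func pollHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustrative only: a production client should verify the
    		// apiserver certificate against the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get(url)
    		if err != nil {
    			return err
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("%d: %s\n", resp.StatusCode, body)
    		if resp.StatusCode == http.StatusOK {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	_ = pollHealthz("https://192.168.39.212:8443/healthz")
    }
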
	I0907 00:14:47.718647   29917 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I0907 00:14:47.718658   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:47.718673   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:47.718685   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:47.734874   29917 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0907 00:14:47.734901   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:47.734909   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:47.734915   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:47.734920   29917 round_trippers.go:580]     Content-Length: 263
	I0907 00:14:47.734925   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:47 GMT
	I0907 00:14:47.734931   29917 round_trippers.go:580]     Audit-Id: 4fb04a4f-a718-47da-89a9-068801d11f0c
	I0907 00:14:47.734936   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:47.734944   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:47.734963   29917 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0907 00:14:47.735032   29917 api_server.go:141] control plane version: v1.28.1
	I0907 00:14:47.735046   29917 api_server.go:131] duration metric: took 5.583053784s to wait for apiserver health ...
	I0907 00:14:47.735053   29917 cni.go:84] Creating CNI manager for ""
	I0907 00:14:47.735064   29917 cni.go:136] 3 nodes found, recommending kindnet
	I0907 00:14:47.736754   29917 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0907 00:14:47.738234   29917 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0907 00:14:47.752738   29917 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0907 00:14:47.752761   29917 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0907 00:14:47.752770   29917 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0907 00:14:47.752781   29917 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0907 00:14:47.752791   29917 command_runner.go:130] > Access: 2023-09-07 00:14:12.931697280 +0000
	I0907 00:14:47.752797   29917 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0907 00:14:47.752805   29917 command_runner.go:130] > Change: 2023-09-07 00:14:11.094697280 +0000
	I0907 00:14:47.752812   29917 command_runner.go:130] >  Birth: -
	I0907 00:14:47.752856   29917 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0907 00:14:47.752867   29917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0907 00:14:47.825535   29917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0907 00:14:48.948553   29917 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0907 00:14:48.954470   29917 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0907 00:14:48.959050   29917 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0907 00:14:48.983263   29917 command_runner.go:130] > daemonset.apps/kindnet configured
	I0907 00:14:48.986069   29917 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.160499582s)
	I0907 00:14:48.986099   29917 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:14:48.986192   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:14:48.986203   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:48.986213   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:48.986227   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:48.990528   29917 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:14:48.990548   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:48.990557   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:48.990567   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:48.990575   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:48.990605   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:48 GMT
	I0907 00:14:48.990619   29917 round_trippers.go:580]     Audit-Id: 544ed435-d791-423d-a36d-935a8d1d3328
	I0907 00:14:48.990632   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:48.992196   29917 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"847"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"828","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83210 chars]
	I0907 00:14:48.996081   29917 system_pods.go:59] 12 kube-system pods found
	I0907 00:14:48.996128   29917 system_pods.go:61] "coredns-5dd5756b68-8ktxh" [c2574ba0-f19a-40c1-a06f-601bb17661f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:14:48.996141   29917 system_pods.go:61] "etcd-multinode-816061" [7ff498e1-17ed-4818-befa-68a5a69b96d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:14:48.996151   29917 system_pods.go:61] "kindnet-9qj9n" [d137582e-041a-4af3-b93e-47e965a488c5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0907 00:14:48.996161   29917 system_pods.go:61] "kindnet-gdck2" [d6762e3d-d971-416f-a45f-bc08ebcfb175] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0907 00:14:48.996175   29917 system_pods.go:61] "kindnet-xgbtc" [137c032b-12d1-4179-8416-0f3cc5733842] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0907 00:14:48.996187   29917 system_pods.go:61] "kube-apiserver-multinode-816061" [dbbbc2db-98c3-44e3-a18d-947bad7ffda2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:14:48.996202   29917 system_pods.go:61] "kube-controller-manager-multinode-816061" [ea192806-6f42-4471-8e73-ae96aa3bfa06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:14:48.996219   29917 system_pods.go:61] "kube-proxy-2wswp" [4d99412b-fc2d-4fce-a7e2-80da3e220e07] Running
	I0907 00:14:48.996226   29917 system_pods.go:61] "kube-proxy-dlt4x" [2c56690f-de33-49ec-8cad-79fdae731daa] Running
	I0907 00:14:48.996233   29917 system_pods.go:61] "kube-proxy-tbzlv" [6b9717d8-174b-4713-a941-382c81cc659e] Running
	I0907 00:14:48.996243   29917 system_pods.go:61] "kube-scheduler-multinode-816061" [3fa4fad1-c309-42a9-af5f-28e6398492c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:14:48.996253   29917 system_pods.go:61] "storage-provisioner" [3ce467f7-aaa1-4391-9bc9-39ef0521ebd2] Running
	I0907 00:14:48.996261   29917 system_pods.go:74] duration metric: took 10.155012ms to wait for pod list to return data ...
	I0907 00:14:48.996274   29917 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:14:48.996333   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I0907 00:14:48.996342   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:48.996354   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:48.996364   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.000162   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:49.000186   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.000197   29917 round_trippers.go:580]     Audit-Id: da631608-8de0-4cc9-bebe-25b2207f4b0e
	I0907 00:14:49.000206   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.000215   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.000223   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.000231   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.000246   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:48 GMT
	I0907 00:14:49.000670   29917 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"847"},"items":[{"metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"762","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15371 chars]
	I0907 00:14:49.001601   29917 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:14:49.001623   29917 node_conditions.go:123] node cpu capacity is 2
	I0907 00:14:49.001632   29917 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:14:49.001636   29917 node_conditions.go:123] node cpu capacity is 2
	I0907 00:14:49.001639   29917 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:14:49.001643   29917 node_conditions.go:123] node cpu capacity is 2
	I0907 00:14:49.001647   29917 node_conditions.go:105] duration metric: took 5.368645ms to run NodePressure ...
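
The NodePressure step above lists every node and records the same two capacity fields per node (ephemeral storage and CPU). A minimal client-go sketch that reads those fields; the kubeconfig path is a placeholder, and in this run minikube's own kubeconfig lives under the jenkins test root shown earlier in the log.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path for illustration.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// The same two capacity fields the log prints per node.
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    }
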
	I0907 00:14:49.001664   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:14:49.201027   29917 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0907 00:14:49.318235   29917 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0907 00:14:49.319882   29917 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:14:49.320000   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0907 00:14:49.320011   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.320022   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.320034   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.323935   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:49.323956   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.323965   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.323974   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.323982   29917 round_trippers.go:580]     Audit-Id: 5789b579-111d-4f98-afdb-fad5be79004b
	I0907 00:14:49.323991   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.324000   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.324009   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.324503   29917 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"868"},"items":[{"metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0907 00:14:49.325447   29917 kubeadm.go:787] kubelet initialised
	I0907 00:14:49.325470   29917 kubeadm.go:788] duration metric: took 5.56597ms waiting for restarted kubelet to initialise ...
	I0907 00:14:49.325478   29917 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:14:49.325542   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:14:49.325553   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.325563   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.325574   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.328938   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:49.328953   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.328959   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.328967   29917 round_trippers.go:580]     Audit-Id: 036f0b78-7b23-43e2-b31d-d2263b3ef121
	I0907 00:14:49.328977   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.328984   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.328992   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.329009   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.329690   29917 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"868"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"828","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83210 chars]
	I0907 00:14:49.332188   29917 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace to be "Ready" ...
	I0907 00:14:49.332267   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:14:49.332281   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.332291   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.332302   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.334890   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:49.334907   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.334916   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.334925   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.334933   29917 round_trippers.go:580]     Audit-Id: 33a746cc-ab77-499a-a6ef-ff8d99de1770
	I0907 00:14:49.334947   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.334961   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.334969   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.335117   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"828","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0907 00:14:49.335538   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:49.335550   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.335557   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.335563   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.339966   29917 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:14:49.339987   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.339996   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.340004   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.340013   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.340020   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.340029   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.340037   29917 round_trippers.go:580]     Audit-Id: 66c8bb6c-f729-4e82-86a1-da90bedbc551
	I0907 00:14:49.340754   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"762","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0907 00:14:49.341056   29917 pod_ready.go:97] node "multinode-816061" hosting pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-816061" has status "Ready":"False"
	I0907 00:14:49.341074   29917 pod_ready.go:81] duration metric: took 8.866717ms waiting for pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace to be "Ready" ...
	E0907 00:14:49.341084   29917 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-816061" hosting pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-816061" has status "Ready":"False"
	I0907 00:14:49.341096   29917 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:14:49.341158   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:14:49.341168   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.341180   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.341194   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.343174   29917 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:14:49.343189   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.343195   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.343201   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.343206   29917 round_trippers.go:580]     Audit-Id: a2156c1a-9532-443f-8dab-da9e9916324c
	I0907 00:14:49.343211   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.343217   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.343227   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.343392   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0907 00:14:49.343782   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:49.343797   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.343807   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.343815   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.345486   29917 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:14:49.345499   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.345505   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.345511   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.345516   29917 round_trippers.go:580]     Audit-Id: 82e031cb-8dcd-40b4-9d0a-997dcb719bc1
	I0907 00:14:49.345521   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.345527   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.345534   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.345825   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"762","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0907 00:14:49.346114   29917 pod_ready.go:97] node "multinode-816061" hosting pod "etcd-multinode-816061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-816061" has status "Ready":"False"
	I0907 00:14:49.346129   29917 pod_ready.go:81] duration metric: took 5.021077ms waiting for pod "etcd-multinode-816061" in "kube-system" namespace to be "Ready" ...
	E0907 00:14:49.346135   29917 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-816061" hosting pod "etcd-multinode-816061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-816061" has status "Ready":"False"
	I0907 00:14:49.346155   29917 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:14:49.346210   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-816061
	I0907 00:14:49.346217   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.346224   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.346235   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.348023   29917 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:14:49.348040   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.348049   29917 round_trippers.go:580]     Audit-Id: b94e213d-02f2-4543-8787-c9fcadfe9cf0
	I0907 00:14:49.348057   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.348066   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.348073   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.348081   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.348090   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.348192   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-816061","namespace":"kube-system","uid":"dbbbc2db-98c3-44e3-a18d-947bad7ffda2","resourceVersion":"821","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.212:8443","kubernetes.io/config.hash":"17d9280f4f521ce2f8119c5c317f1d67","kubernetes.io/config.mirror":"17d9280f4f521ce2f8119c5c317f1d67","kubernetes.io/config.seen":"2023-09-07T00:04:04.251716113Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0907 00:14:49.348582   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:49.348597   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.348607   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.348617   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.350474   29917 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:14:49.350487   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.350494   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.350499   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.350504   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.350509   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.350514   29917 round_trippers.go:580]     Audit-Id: d73afed8-52a5-4691-acf8-e1a355a04a48
	I0907 00:14:49.350519   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.351084   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"762","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0907 00:14:49.351383   29917 pod_ready.go:97] node "multinode-816061" hosting pod "kube-apiserver-multinode-816061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-816061" has status "Ready":"False"
	I0907 00:14:49.351398   29917 pod_ready.go:81] duration metric: took 5.232384ms waiting for pod "kube-apiserver-multinode-816061" in "kube-system" namespace to be "Ready" ...
	E0907 00:14:49.351404   29917 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-816061" hosting pod "kube-apiserver-multinode-816061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-816061" has status "Ready":"False"
	I0907 00:14:49.351410   29917 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:14:49.351448   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-816061
	I0907 00:14:49.351455   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.351461   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.351467   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.354581   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:49.354597   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.354606   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.354614   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.354622   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.354630   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.354638   29917 round_trippers.go:580]     Audit-Id: 01a416c8-9582-4617-b3c3-dec7fb50ffeb
	I0907 00:14:49.354648   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.354890   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-816061","namespace":"kube-system","uid":"ea192806-6f42-4471-8e73-ae96aa3bfa06","resourceVersion":"822","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45d88e9a1c94ef1043c5c8795b51d51f","kubernetes.io/config.mirror":"45d88e9a1c94ef1043c5c8795b51d51f","kubernetes.io/config.seen":"2023-09-07T00:04:04.251717776Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0907 00:14:49.386509   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:49.386542   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.386555   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.386564   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.389297   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:49.389319   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.389332   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.389343   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.389350   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.389358   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.389372   29917 round_trippers.go:580]     Audit-Id: 44b9a962-d702-4efe-9398-0a175c628f94
	I0907 00:14:49.389384   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.389518   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"762","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0907 00:14:49.389827   29917 pod_ready.go:97] node "multinode-816061" hosting pod "kube-controller-manager-multinode-816061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-816061" has status "Ready":"False"
	I0907 00:14:49.389851   29917 pod_ready.go:81] duration metric: took 38.433816ms waiting for pod "kube-controller-manager-multinode-816061" in "kube-system" namespace to be "Ready" ...
	E0907 00:14:49.389863   29917 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-816061" hosting pod "kube-controller-manager-multinode-816061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-816061" has status "Ready":"False"
	I0907 00:14:49.389873   29917 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2wswp" in "kube-system" namespace to be "Ready" ...
	I0907 00:14:49.586249   29917 request.go:629] Waited for 196.296558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wswp
	I0907 00:14:49.586321   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wswp
	I0907 00:14:49.586326   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.586333   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.586339   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.589203   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:49.589226   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.589241   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.589249   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.589256   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.589265   29917 round_trippers.go:580]     Audit-Id: a45cff86-dec8-4a05-8c96-bbbdcc341e7c
	I0907 00:14:49.589273   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.589283   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.589437   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2wswp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4d99412b-fc2d-4fce-a7e2-80da3e220e07","resourceVersion":"522","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0907 00:14:49.787213   29917 request.go:629] Waited for 197.375147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:14:49.787276   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:14:49.787283   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.787292   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.787301   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.789673   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:49.789691   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.789698   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.789703   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.789709   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.789714   29917 round_trippers.go:580]     Audit-Id: 3f329122-a585-429f-960e-5cf039bb5058
	I0907 00:14:49.789721   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.789726   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.789857   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"754","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I0907 00:14:49.790087   29917 pod_ready.go:92] pod "kube-proxy-2wswp" in "kube-system" namespace has status "Ready":"True"
	I0907 00:14:49.790099   29917 pod_ready.go:81] duration metric: took 400.211122ms waiting for pod "kube-proxy-2wswp" in "kube-system" namespace to be "Ready" ...
	I0907 00:14:49.790108   29917 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dlt4x" in "kube-system" namespace to be "Ready" ...
	I0907 00:14:49.986568   29917 request.go:629] Waited for 196.395679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlt4x
	I0907 00:14:49.986625   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlt4x
	I0907 00:14:49.986629   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:49.986637   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:49.986643   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:49.989534   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:49.989555   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:49.989564   29917 round_trippers.go:580]     Audit-Id: b9aa34f4-7da8-4539-95ab-b90235dc560d
	I0907 00:14:49.989578   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:49.989586   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:49.989593   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:49.989600   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:49.989608   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:49 GMT
	I0907 00:14:49.989959   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dlt4x","generateName":"kube-proxy-","namespace":"kube-system","uid":"2c56690f-de33-49ec-8cad-79fdae731daa","resourceVersion":"735","creationTimestamp":"2023-09-07T00:06:03Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:06:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0907 00:14:50.186818   29917 request.go:629] Waited for 196.40537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:14:50.186870   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:14:50.186875   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:50.186883   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:50.186892   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:50.189230   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:50.189255   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:50.189268   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:50.189278   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:50 GMT
	I0907 00:14:50.189285   29917 round_trippers.go:580]     Audit-Id: b2c24a8a-45fa-41b0-bf96-0a086dfcb331
	I0907 00:14:50.189293   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:50.189302   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:50.189314   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:50.189744   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m03","uid":"92bc42a5-722e-482e-9f35-19fa4d9a6485","resourceVersion":"758","creationTimestamp":"2023-09-07T00:06:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3533 chars]
	I0907 00:14:50.189990   29917 pod_ready.go:92] pod "kube-proxy-dlt4x" in "kube-system" namespace has status "Ready":"True"
	I0907 00:14:50.190002   29917 pod_ready.go:81] duration metric: took 399.890402ms waiting for pod "kube-proxy-dlt4x" in "kube-system" namespace to be "Ready" ...
	I0907 00:14:50.190011   29917 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tbzlv" in "kube-system" namespace to be "Ready" ...
	I0907 00:14:50.386545   29917 request.go:629] Waited for 196.48432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbzlv
	I0907 00:14:50.386620   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbzlv
	I0907 00:14:50.386625   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:50.386635   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:50.386641   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:50.389055   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:50.389074   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:50.389084   29917 round_trippers.go:580]     Audit-Id: 73572f65-7a76-4ba7-be02-f1383996fbe3
	I0907 00:14:50.389093   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:50.389154   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:50.389176   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:50.389186   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:50.389199   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:50 GMT
	I0907 00:14:50.389368   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbzlv","generateName":"kube-proxy-","namespace":"kube-system","uid":"6b9717d8-174b-4713-a941-382c81cc659e","resourceVersion":"846","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0907 00:14:50.587286   29917 request.go:629] Waited for 197.449099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:50.587358   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:50.587364   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:50.587374   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:50.587384   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:50.589611   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:50.589635   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:50.589660   29917 round_trippers.go:580]     Audit-Id: f2e939fa-8e14-4530-9a07-35fbbecf7a75
	I0907 00:14:50.589666   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:50.589673   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:50.589681   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:50.589693   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:50.589702   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:50 GMT
	I0907 00:14:50.589859   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"762","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0907 00:14:50.590174   29917 pod_ready.go:97] node "multinode-816061" hosting pod "kube-proxy-tbzlv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-816061" has status "Ready":"False"
	I0907 00:14:50.590193   29917 pod_ready.go:81] duration metric: took 400.175423ms waiting for pod "kube-proxy-tbzlv" in "kube-system" namespace to be "Ready" ...
	E0907 00:14:50.590203   29917 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-816061" hosting pod "kube-proxy-tbzlv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-816061" has status "Ready":"False"
	I0907 00:14:50.590214   29917 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:14:50.786658   29917 request.go:629] Waited for 196.371035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-816061
	I0907 00:14:50.786707   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-816061
	I0907 00:14:50.786712   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:50.786719   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:50.786726   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:50.790001   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:50.790024   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:50.790038   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:50.790046   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:50.790053   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:50.790062   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:50.790071   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:50 GMT
	I0907 00:14:50.790081   29917 round_trippers.go:580]     Audit-Id: 0664e6a7-5d97-4730-a467-8761bad57c6c
	I0907 00:14:50.790172   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-816061","namespace":"kube-system","uid":"3fa4fad1-c309-42a9-af5f-28e6398492c7","resourceVersion":"825","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ac3fb26098ffac0d0e40ebb845f9b9fe","kubernetes.io/config.mirror":"ac3fb26098ffac0d0e40ebb845f9b9fe","kubernetes.io/config.seen":"2023-09-07T00:04:04.251718754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0907 00:14:50.986897   29917 request.go:629] Waited for 196.359766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:50.986958   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:50.986962   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:50.986970   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:50.986976   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:50.990023   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:50.990044   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:50.990054   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:50.990062   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:50.990069   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:50 GMT
	I0907 00:14:50.990077   29917 round_trippers.go:580]     Audit-Id: 467b5a48-2da6-49d8-b292-9968c7c6d52c
	I0907 00:14:50.990084   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:50.990093   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:50.990253   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"762","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0907 00:14:50.990532   29917 pod_ready.go:97] node "multinode-816061" hosting pod "kube-scheduler-multinode-816061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-816061" has status "Ready":"False"
	I0907 00:14:50.990557   29917 pod_ready.go:81] duration metric: took 400.336301ms waiting for pod "kube-scheduler-multinode-816061" in "kube-system" namespace to be "Ready" ...
	E0907 00:14:50.990566   29917 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-816061" hosting pod "kube-scheduler-multinode-816061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-816061" has status "Ready":"False"
	I0907 00:14:50.990575   29917 pod_ready.go:38] duration metric: took 1.665086909s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:14:50.990605   29917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:14:51.004650   29917 command_runner.go:130] > -16
	I0907 00:14:51.004786   29917 ops.go:34] apiserver oom_adj: -16
	I0907 00:14:51.004802   29917 kubeadm.go:640] restartCluster took 22.76122669s
	I0907 00:14:51.004811   29917 kubeadm.go:406] StartCluster complete in 22.813914047s
	I0907 00:14:51.004830   29917 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:14:51.004912   29917 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:14:51.005533   29917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:14:51.005775   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:14:51.005813   29917 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:14:51.008831   29917 out.go:177] * Enabled addons: 
	I0907 00:14:51.006015   29917 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:14:51.006082   29917 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:14:51.010317   29917 addons.go:502] enable addons completed in 4.510882ms: enabled=[]
	I0907 00:14:51.010538   29917 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:14:51.010842   29917 round_trippers.go:463] GET https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0907 00:14:51.010853   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:51.010861   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:51.010901   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:51.017169   29917 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0907 00:14:51.017186   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:51.017193   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:51.017199   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:51.017204   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:51.017209   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:51.017214   29917 round_trippers.go:580]     Content-Length: 291
	I0907 00:14:51.017219   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:50 GMT
	I0907 00:14:51.017225   29917 round_trippers.go:580]     Audit-Id: 17373495-bcf7-486d-8489-6f5bbbfe557f
	I0907 00:14:51.017286   29917 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"583de68c-e976-43b9-bd36-bcf190acd905","resourceVersion":"859","creationTimestamp":"2023-09-07T00:04:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0907 00:14:51.017433   29917 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-816061" context rescaled to 1 replicas
	I0907 00:14:51.017459   29917 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:14:51.019101   29917 out.go:177] * Verifying Kubernetes components...
	I0907 00:14:51.020430   29917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:14:51.112227   29917 command_runner.go:130] > apiVersion: v1
	I0907 00:14:51.112250   29917 command_runner.go:130] > data:
	I0907 00:14:51.112254   29917 command_runner.go:130] >   Corefile: |
	I0907 00:14:51.112258   29917 command_runner.go:130] >     .:53 {
	I0907 00:14:51.112262   29917 command_runner.go:130] >         log
	I0907 00:14:51.112267   29917 command_runner.go:130] >         errors
	I0907 00:14:51.112271   29917 command_runner.go:130] >         health {
	I0907 00:14:51.112276   29917 command_runner.go:130] >            lameduck 5s
	I0907 00:14:51.112280   29917 command_runner.go:130] >         }
	I0907 00:14:51.112284   29917 command_runner.go:130] >         ready
	I0907 00:14:51.112289   29917 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0907 00:14:51.112294   29917 command_runner.go:130] >            pods insecure
	I0907 00:14:51.112299   29917 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0907 00:14:51.112306   29917 command_runner.go:130] >            ttl 30
	I0907 00:14:51.112310   29917 command_runner.go:130] >         }
	I0907 00:14:51.112316   29917 command_runner.go:130] >         prometheus :9153
	I0907 00:14:51.112321   29917 command_runner.go:130] >         hosts {
	I0907 00:14:51.112327   29917 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0907 00:14:51.112332   29917 command_runner.go:130] >            fallthrough
	I0907 00:14:51.112336   29917 command_runner.go:130] >         }
	I0907 00:14:51.112341   29917 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0907 00:14:51.112347   29917 command_runner.go:130] >            max_concurrent 1000
	I0907 00:14:51.112351   29917 command_runner.go:130] >         }
	I0907 00:14:51.112357   29917 command_runner.go:130] >         cache 30
	I0907 00:14:51.112362   29917 command_runner.go:130] >         loop
	I0907 00:14:51.112368   29917 command_runner.go:130] >         reload
	I0907 00:14:51.112372   29917 command_runner.go:130] >         loadbalance
	I0907 00:14:51.112377   29917 command_runner.go:130] >     }
	I0907 00:14:51.112382   29917 command_runner.go:130] > kind: ConfigMap
	I0907 00:14:51.112391   29917 command_runner.go:130] > metadata:
	I0907 00:14:51.112399   29917 command_runner.go:130] >   creationTimestamp: "2023-09-07T00:04:04Z"
	I0907 00:14:51.112405   29917 command_runner.go:130] >   name: coredns
	I0907 00:14:51.112415   29917 command_runner.go:130] >   namespace: kube-system
	I0907 00:14:51.112423   29917 command_runner.go:130] >   resourceVersion: "401"
	I0907 00:14:51.112440   29917 command_runner.go:130] >   uid: ecb72cf6-2a6c-419e-8770-8a9176c286a3
	I0907 00:14:51.112515   29917 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0907 00:14:51.112525   29917 node_ready.go:35] waiting up to 6m0s for node "multinode-816061" to be "Ready" ...
	I0907 00:14:51.186858   29917 request.go:629] Waited for 74.257143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:51.186924   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:51.186931   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:51.186940   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:51.186949   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:51.189745   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:51.189775   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:51.189784   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:51.189792   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:51.189800   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:51 GMT
	I0907 00:14:51.189807   29917 round_trippers.go:580]     Audit-Id: 3ab1f230-1bf0-4de1-ab29-dd34c37cae98
	I0907 00:14:51.189814   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:51.189822   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:51.191019   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"762","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0907 00:14:51.386815   29917 request.go:629] Waited for 195.387182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:51.386869   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:51.386874   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:51.386882   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:51.386888   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:51.389666   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:51.389687   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:51.389696   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:51.389705   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:51.389713   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:51 GMT
	I0907 00:14:51.389721   29917 round_trippers.go:580]     Audit-Id: 0cc9951e-1c79-4ffd-843a-7786c75d66ad
	I0907 00:14:51.389729   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:51.389738   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:51.389884   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:51.390202   29917 node_ready.go:49] node "multinode-816061" has status "Ready":"True"
	I0907 00:14:51.390217   29917 node_ready.go:38] duration metric: took 277.676478ms waiting for node "multinode-816061" to be "Ready" ...
	I0907 00:14:51.390229   29917 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:14:51.586683   29917 request.go:629] Waited for 196.374528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:14:51.586748   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:14:51.586755   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:51.586764   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:51.586773   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:51.593763   29917 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0907 00:14:51.593793   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:51.593801   29917 round_trippers.go:580]     Audit-Id: 04f7c59b-3f50-404c-973d-e2277017886c
	I0907 00:14:51.593807   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:51.593812   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:51.593822   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:51.593828   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:51.593834   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:51 GMT
	I0907 00:14:51.595220   29917 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"877"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"828","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82960 chars]
	I0907 00:14:51.597710   29917 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace to be "Ready" ...
	I0907 00:14:51.787179   29917 request.go:629] Waited for 189.396054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:14:51.787263   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:14:51.787271   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:51.787281   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:51.787292   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:51.789998   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:51.790018   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:51.790026   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:51.790034   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:51.790042   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:51.790050   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:51 GMT
	I0907 00:14:51.790057   29917 round_trippers.go:580]     Audit-Id: d4c834ed-3fe9-49c5-b325-7a39baf0cdf3
	I0907 00:14:51.790065   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:51.790273   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"828","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0907 00:14:51.986213   29917 request.go:629] Waited for 195.328863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:51.986310   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:51.986317   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:51.986329   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:51.986341   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:51.989156   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:51.989174   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:51.989195   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:51.989204   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:51 GMT
	I0907 00:14:51.989211   29917 round_trippers.go:580]     Audit-Id: f543433b-2e88-4c62-8fa4-21808c37296a
	I0907 00:14:51.989219   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:51.989227   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:51.989240   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:51.989731   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:52.186401   29917 request.go:629] Waited for 196.292893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:14:52.186498   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:14:52.186510   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:52.186537   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:52.186551   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:52.189672   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:52.189692   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:52.189699   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:52 GMT
	I0907 00:14:52.189705   29917 round_trippers.go:580]     Audit-Id: d114e731-51d4-4f84-b7ef-807a825d9581
	I0907 00:14:52.189710   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:52.189715   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:52.189729   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:52.189735   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:52.190099   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"828","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0907 00:14:52.386648   29917 request.go:629] Waited for 196.105352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:52.386691   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:52.386697   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:52.386706   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:52.386715   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:52.389611   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:52.389636   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:52.389646   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:52 GMT
	I0907 00:14:52.389654   29917 round_trippers.go:580]     Audit-Id: b0191a09-327d-4aac-beb8-00603f28b001
	I0907 00:14:52.389662   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:52.389672   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:52.389680   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:52.389688   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:52.390026   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:52.891117   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:14:52.891139   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:52.891147   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:52.891153   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:52.894174   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:52.894194   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:52.894203   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:52.894212   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:52.894219   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:52.894227   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:52.894235   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:52 GMT
	I0907 00:14:52.894245   29917 round_trippers.go:580]     Audit-Id: af0a2124-a06f-463a-af5a-0d47cc01ec01
	I0907 00:14:52.894848   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"828","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0907 00:14:52.895341   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:52.895355   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:52.895365   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:52.895374   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:52.897639   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:52.897653   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:52.897659   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:52.897665   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:52.897670   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:52 GMT
	I0907 00:14:52.897675   29917 round_trippers.go:580]     Audit-Id: 25a22c51-6efe-44f3-ab7e-30acb2e7221b
	I0907 00:14:52.897680   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:52.897685   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:52.897833   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:53.390478   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:14:53.390502   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:53.390513   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:53.390519   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:53.393625   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:53.393651   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:53.393663   29917 round_trippers.go:580]     Audit-Id: 7c945870-3eae-4a40-a1bf-95b64b7d9f8b
	I0907 00:14:53.393672   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:53.393679   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:53.393687   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:53.393695   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:53.393704   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:53 GMT
	I0907 00:14:53.393987   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"828","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0907 00:14:53.394466   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:53.394479   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:53.394487   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:53.394492   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:53.396834   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:53.396857   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:53.396868   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:53.396877   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:53 GMT
	I0907 00:14:53.396891   29917 round_trippers.go:580]     Audit-Id: e48c7998-f1d8-4725-8b4e-8b9dba5b2d97
	I0907 00:14:53.396899   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:53.396920   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:53.396932   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:53.397155   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:53.890796   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:14:53.890816   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:53.890825   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:53.890831   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:53.893957   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:53.893982   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:53.893993   29917 round_trippers.go:580]     Audit-Id: eab41d68-305f-4f13-b830-a13cc4c50bd2
	I0907 00:14:53.894000   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:53.894006   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:53.894015   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:53.894021   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:53.894028   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:53 GMT
	I0907 00:14:53.894199   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"828","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0907 00:14:53.894747   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:53.894762   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:53.894769   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:53.894792   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:53.897251   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:53.897266   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:53.897276   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:53 GMT
	I0907 00:14:53.897284   29917 round_trippers.go:580]     Audit-Id: dba9b067-b300-450e-9214-06306082acd0
	I0907 00:14:53.897293   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:53.897302   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:53.897315   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:53.897327   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:53.897946   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:53.898357   29917 pod_ready.go:102] pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace has status "Ready":"False"
	I0907 00:14:54.390627   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:14:54.390650   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:54.390658   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:54.390664   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:54.393486   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:54.393509   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:54.393520   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:54.393529   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:54.393538   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:54.393547   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:54.393556   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:54 GMT
	I0907 00:14:54.393562   29917 round_trippers.go:580]     Audit-Id: 430ac19c-6f6a-4d31-8470-06deb72fdd9f
	I0907 00:14:54.393904   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"828","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0907 00:14:54.394339   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:54.394352   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:54.394360   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:54.394365   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:54.396899   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:54.396912   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:54.396919   29917 round_trippers.go:580]     Audit-Id: 98e8b87c-a733-4ecd-9cf1-7aae6dca3d9e
	I0907 00:14:54.396924   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:54.396930   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:54.396935   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:54.396940   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:54.396945   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:54 GMT
	I0907 00:14:54.397358   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:54.890980   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:14:54.891005   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:54.891016   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:54.891024   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:54.894352   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:54.894369   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:54.894375   29917 round_trippers.go:580]     Audit-Id: e391a23e-b0f8-40f4-ba16-dd99f4155101
	I0907 00:14:54.894381   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:54.894389   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:54.894396   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:54.894404   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:54.894412   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:54 GMT
	I0907 00:14:54.895032   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"828","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0907 00:14:54.895452   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:54.895466   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:54.895477   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:54.895486   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:54.897946   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:54.897961   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:54.897967   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:54.897972   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:54.897978   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:54.897987   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:54.898010   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:54 GMT
	I0907 00:14:54.898023   29917 round_trippers.go:580]     Audit-Id: 55e18e03-f23f-453d-a4a3-942e7f7bef21
	I0907 00:14:54.898231   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:55.391098   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:14:55.391116   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:55.391125   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:55.391131   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:55.394499   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:55.394522   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:55.394533   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:55.394541   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:55 GMT
	I0907 00:14:55.394549   29917 round_trippers.go:580]     Audit-Id: 74918d17-34d7-43d9-91e2-756aefd35c38
	I0907 00:14:55.394556   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:55.394565   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:55.394576   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:55.395321   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"828","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0907 00:14:55.395842   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:55.395859   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:55.395866   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:55.395877   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:55.398099   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:55.398121   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:55.398137   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:55 GMT
	I0907 00:14:55.398146   29917 round_trippers.go:580]     Audit-Id: f62fe0ef-8158-4be2-8f84-7da719b187d7
	I0907 00:14:55.398155   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:55.398163   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:55.398171   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:55.398188   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:55.398317   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:55.890968   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:14:55.891003   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:55.891011   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:55.891018   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:55.893953   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:55.893969   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:55.893976   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:55.893982   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:55.893988   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:55 GMT
	I0907 00:14:55.893994   29917 round_trippers.go:580]     Audit-Id: 05e44df9-72f0-4126-9a33-30bd3b0ab309
	I0907 00:14:55.894000   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:55.894008   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:55.894962   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"887","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0907 00:14:55.895551   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:55.895572   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:55.895582   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:55.895591   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:55.897953   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:55.897967   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:55.897973   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:55 GMT
	I0907 00:14:55.897982   29917 round_trippers.go:580]     Audit-Id: 980e3da7-4670-45f6-96e4-d17f55daf3c5
	I0907 00:14:55.897997   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:55.898007   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:55.898013   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:55.898018   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:55.898270   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:55.898575   29917 pod_ready.go:92] pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace has status "Ready":"True"
	I0907 00:14:55.898591   29917 pod_ready.go:81] duration metric: took 4.30085968s waiting for pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace to be "Ready" ...
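	For context on the loop that just completed: each GET above polls the pod (and its node) roughly every 500ms until the pod's Ready condition reports True. The following is a minimal sketch of that polling pattern with client-go; it illustrates what the log shows and is not minikube's pod_ready.go, and WaitForPodReady is a hypothetical helper name.

	// Package readywait is an illustrative sketch, not minikube's pod_ready.go.
	package readywait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// WaitForPodReady polls the named pod every 500ms (the cadence visible in
	// the log above) until its Ready condition is True or the timeout expires.
	func WaitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}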
	I0907 00:14:55.898602   29917 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:14:55.898652   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:14:55.898661   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:55.898672   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:55.898683   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:55.900988   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:55.901005   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:55.901012   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:55.901018   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:55 GMT
	I0907 00:14:55.901024   29917 round_trippers.go:580]     Audit-Id: 31148d26-5e97-4091-b8c3-b333f41f2408
	I0907 00:14:55.901033   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:55.901047   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:55.901056   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:55.901296   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0907 00:14:55.901640   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:55.901651   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:55.901658   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:55.901664   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:55.903549   29917 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:14:55.903565   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:55.903574   29917 round_trippers.go:580]     Audit-Id: bee59fee-46d0-4bf0-8792-6cf456a93beb
	I0907 00:14:55.903591   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:55.903599   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:55.903607   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:55.903618   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:55.903625   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:55 GMT
	I0907 00:14:55.903827   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:55.904158   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:14:55.904170   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:55.904177   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:55.904183   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:55.905925   29917 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:14:55.905938   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:55.905945   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:55.905951   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:55.905956   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:55 GMT
	I0907 00:14:55.905967   29917 round_trippers.go:580]     Audit-Id: 3db6a6b5-377b-4ffd-bd71-8867972d6744
	I0907 00:14:55.905979   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:55.905991   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:55.906220   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0907 00:14:55.986867   29917 request.go:629] Waited for 80.248827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:55.986933   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:55.986939   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:55.986952   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:55.986959   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:55.990413   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:55.990433   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:55.990440   29917 round_trippers.go:580]     Audit-Id: 7fed646b-54ce-4b53-9f8f-9c6191ab337a
	I0907 00:14:55.990445   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:55.990451   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:55.990456   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:55.990462   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:55.990468   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:55 GMT
	I0907 00:14:55.990612   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:56.491736   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:14:56.491757   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:56.491766   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:56.491772   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:56.494575   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:56.494598   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:56.494606   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:56.494611   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:56.494621   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:56.494630   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:56 GMT
	I0907 00:14:56.494638   29917 round_trippers.go:580]     Audit-Id: 649bb6a6-dab2-46cf-aabc-863eb56b7768
	I0907 00:14:56.494649   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:56.495069   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0907 00:14:56.495467   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:56.495479   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:56.495486   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:56.495491   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:56.498443   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:56.498463   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:56.498474   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:56.498484   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:56.498492   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:56.498502   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:56.498512   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:56 GMT
	I0907 00:14:56.498526   29917 round_trippers.go:580]     Audit-Id: bd233899-1d5e-4b00-83a2-49f3087f3012
	I0907 00:14:56.498668   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:56.991324   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:14:56.991352   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:56.991366   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:56.991375   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:56.994877   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:56.994901   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:56.994912   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:56.994921   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:56.994930   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:56 GMT
	I0907 00:14:56.994939   29917 round_trippers.go:580]     Audit-Id: 08f03c06-9a15-415d-a2d2-4db50ba5c8ae
	I0907 00:14:56.994950   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:56.994961   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:56.995224   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0907 00:14:56.995583   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:56.995606   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:56.995616   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:56.995626   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:56.997671   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:56.997687   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:56.997693   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:56.997699   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:56 GMT
	I0907 00:14:56.997743   29917 round_trippers.go:580]     Audit-Id: 0dcfe656-1063-49fb-9baa-4e2713ba5290
	I0907 00:14:56.997771   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:56.997781   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:56.997790   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:56.998027   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:57.491728   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:14:57.491751   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:57.491765   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:57.491774   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:57.495457   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:57.495479   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:57.495491   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:57.495500   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:57.495510   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:57 GMT
	I0907 00:14:57.495515   29917 round_trippers.go:580]     Audit-Id: 964fdfd4-9410-4d94-b6d2-fb53241d322e
	I0907 00:14:57.495521   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:57.495526   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:57.496277   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0907 00:14:57.496730   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:57.496745   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:57.496756   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:57.496767   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:57.500522   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:57.500539   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:57.500548   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:57 GMT
	I0907 00:14:57.500557   29917 round_trippers.go:580]     Audit-Id: 4d6443b0-2399-4a30-b873-0ef22289a5f0
	I0907 00:14:57.500565   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:57.500573   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:57.500586   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:57.500594   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:57.500932   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:57.991554   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:14:57.991580   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:57.991609   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:57.991618   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:57.995659   29917 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:14:57.995681   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:57.995687   29917 round_trippers.go:580]     Audit-Id: 09e8e1cb-3242-4679-bb5f-7e93bbd772d0
	I0907 00:14:57.995693   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:57.995699   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:57.995704   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:57.995709   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:57.995714   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:57 GMT
	I0907 00:14:57.995873   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0907 00:14:57.996265   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:57.996279   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:57.996290   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:57.996298   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:57.998804   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:57.998820   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:57.998826   29917 round_trippers.go:580]     Audit-Id: 624c8fe4-0d96-40b7-a4fb-7f920c14ee8f
	I0907 00:14:57.998832   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:57.998837   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:57.998843   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:57.998848   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:57.998854   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:57 GMT
	I0907 00:14:57.998968   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:57.999264   29917 pod_ready.go:102] pod "etcd-multinode-816061" in "kube-system" namespace has status "Ready":"False"
	I0907 00:14:58.492034   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:14:58.492055   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:58.492063   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:58.492069   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:58.500515   29917 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0907 00:14:58.500547   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:58.500557   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:58 GMT
	I0907 00:14:58.500564   29917 round_trippers.go:580]     Audit-Id: ef56db9b-c52c-46b9-990e-6d8954d77f9a
	I0907 00:14:58.500571   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:58.500579   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:58.500587   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:58.500596   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:58.500741   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0907 00:14:58.501153   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:58.501165   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:58.501172   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:58.501178   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:58.507069   29917 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0907 00:14:58.507089   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:58.507099   29917 round_trippers.go:580]     Audit-Id: 81c22a47-2a2b-4f4d-bc7c-7269bf98b0a6
	I0907 00:14:58.507107   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:58.507114   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:58.507122   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:58.507135   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:58.507146   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:58 GMT
	I0907 00:14:58.507354   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:58.992031   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:14:58.992057   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:58.992065   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:58.992071   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:58.997888   29917 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0907 00:14:58.997916   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:58.997928   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:58.997936   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:58.997944   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:58.997952   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:58.997959   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:58 GMT
	I0907 00:14:58.997973   29917 round_trippers.go:580]     Audit-Id: 00520241-ce9d-49d4-9ff3-1ab72f89b3e1
	I0907 00:14:58.998829   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0907 00:14:58.999232   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:58.999245   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:58.999256   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:58.999265   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:59.001978   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:59.001995   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:59.002002   29917 round_trippers.go:580]     Audit-Id: 3ffc8f21-e178-4369-a259-130c501eb204
	I0907 00:14:59.002009   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:59.002015   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:59.002020   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:59.002025   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:59.002038   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:58 GMT
	I0907 00:14:59.003014   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:59.491753   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:14:59.491779   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:59.491790   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:59.491799   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:59.494848   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:14:59.494868   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:59.494875   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:59.494881   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:59.494886   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:59 GMT
	I0907 00:14:59.494892   29917 round_trippers.go:580]     Audit-Id: 60de0d9d-1872-4ab3-b11a-eb5c0cfb8c6c
	I0907 00:14:59.494904   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:59.494917   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:59.495097   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0907 00:14:59.495480   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:59.495492   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:59.495503   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:59.495512   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:59.497615   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:59.497628   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:59.497634   29917 round_trippers.go:580]     Audit-Id: ddf20b49-3c92-4c9e-a755-0899759473eb
	I0907 00:14:59.497643   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:59.497649   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:59.497662   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:59.497674   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:59.497684   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:59 GMT
	I0907 00:14:59.497882   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:14:59.991501   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:14:59.991525   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:59.991536   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:59.991545   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:59.994236   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:59.994257   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:59.994264   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:59 GMT
	I0907 00:14:59.994270   29917 round_trippers.go:580]     Audit-Id: 1abb8c28-4f7e-4562-934c-6ee7d11d9bb8
	I0907 00:14:59.994275   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:59.994304   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:59.994316   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:59.994325   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:59.994788   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0907 00:14:59.995170   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:14:59.995183   29917 round_trippers.go:469] Request Headers:
	I0907 00:14:59.995193   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:14:59.995202   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:14:59.997568   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:14:59.997584   29917 round_trippers.go:577] Response Headers:
	I0907 00:14:59.997591   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:14:59.997596   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:14:59 GMT
	I0907 00:14:59.997602   29917 round_trippers.go:580]     Audit-Id: 873e61c3-1d5c-42b6-bb75-ee6694250b32
	I0907 00:14:59.997615   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:14:59.997631   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:14:59.997639   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:14:59.997746   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:15:00.491896   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:15:00.491974   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:00.491991   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:00.492003   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:00.495874   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:15:00.495892   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:00.495898   29917 round_trippers.go:580]     Audit-Id: 5c50ac34-4ce3-469a-8315-1378d85f2b51
	I0907 00:15:00.495904   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:00.495911   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:00.495919   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:00.495934   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:00.495943   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:00 GMT
	I0907 00:15:00.496929   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"820","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0907 00:15:00.497319   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:15:00.497332   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:00.497339   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:00.497346   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:00.499638   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:15:00.499654   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:00.499663   29917 round_trippers.go:580]     Audit-Id: fb205a7a-f270-42a9-99c9-8161bbe98429
	I0907 00:15:00.499672   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:00.499681   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:00.499696   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:00.499705   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:00.499718   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:00 GMT
	I0907 00:15:00.499874   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:15:00.500260   29917 pod_ready.go:102] pod "etcd-multinode-816061" in "kube-system" namespace has status "Ready":"False"
	I0907 00:15:00.991466   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:15:00.991485   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:00.991496   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:00.991505   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:00.993841   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:15:00.993863   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:00.993873   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:00.993881   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:00.993888   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:00.993898   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:00 GMT
	I0907 00:15:00.993907   29917 round_trippers.go:580]     Audit-Id: 08f61c91-bc42-4e0b-a097-7c036da25694
	I0907 00:15:00.993915   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:00.994061   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"910","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0907 00:15:00.994553   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:15:00.994567   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:00.994575   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:00.994580   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:00.996728   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:15:00.996744   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:00.996754   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:00.996763   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:00.996772   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:00.996782   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:00.996796   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:00 GMT
	I0907 00:15:00.996807   29917 round_trippers.go:580]     Audit-Id: 2f6c4055-60ec-40e3-8aa9-e17d135508f9
	I0907 00:15:00.996966   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:15:00.997336   29917 pod_ready.go:92] pod "etcd-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:15:00.997351   29917 pod_ready.go:81] duration metric: took 5.098743221s waiting for pod "etcd-multinode-816061" in "kube-system" namespace to be "Ready" ...
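The stretch of log above is the readiness poll for the etcd static pod: roughly every 500ms the client GETs the pod and its node until the pod's Ready condition flips to True, and each iteration produces one of the round_trippers request/response pairs shown. Below is a minimal client-go sketch of that kind of check, assuming the default kubeconfig path and reusing the pod name and namespace from this log; it is only an illustration, not minikube's pod_ready.go implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Kubeconfig path is an assumption for this sketch (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 500ms for up to 6 minutes, matching the cadence and the
        // "waiting up to 6m0s" timeout visible in the log above.
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-816061", metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient errors and keep polling
            }
            return isPodReady(pod), nil
        })
        if err != nil {
            fmt.Println("pod never became Ready:", err)
            return
        }
        fmt.Println("pod is Ready")
    }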
	I0907 00:15:00.997369   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:15:00.997415   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-816061
	I0907 00:15:00.997422   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:00.997429   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:00.997435   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:00.999263   29917 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:15:00.999276   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:00.999285   29917 round_trippers.go:580]     Audit-Id: a54a9b5c-4c6e-4556-932a-f2da7e409b3d
	I0907 00:15:00.999295   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:00.999306   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:00.999319   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:00.999328   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:00.999341   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:00 GMT
	I0907 00:15:00.999518   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-816061","namespace":"kube-system","uid":"dbbbc2db-98c3-44e3-a18d-947bad7ffda2","resourceVersion":"880","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.212:8443","kubernetes.io/config.hash":"17d9280f4f521ce2f8119c5c317f1d67","kubernetes.io/config.mirror":"17d9280f4f521ce2f8119c5c317f1d67","kubernetes.io/config.seen":"2023-09-07T00:04:04.251716113Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0907 00:15:00.999921   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:15:00.999935   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:00.999942   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:00.999948   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:01.001918   29917 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:15:01.001931   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:01.001940   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:00 GMT
	I0907 00:15:01.001949   29917 round_trippers.go:580]     Audit-Id: 277f53d0-e2cb-4fe2-88f5-7125c6364226
	I0907 00:15:01.001960   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:01.001974   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:01.001983   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:01.001996   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:01.002250   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:15:01.002617   29917 pod_ready.go:92] pod "kube-apiserver-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:15:01.002638   29917 pod_ready.go:81] duration metric: took 5.259676ms waiting for pod "kube-apiserver-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:15:01.002650   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:15:01.002715   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-816061
	I0907 00:15:01.002725   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:01.002735   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:01.002748   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:01.004591   29917 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:15:01.004605   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:01.004615   29917 round_trippers.go:580]     Audit-Id: edb36394-cd22-4dad-ae04-a15388824033
	I0907 00:15:01.004623   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:01.004633   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:01.004643   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:01.004653   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:01.004664   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:00 GMT
	I0907 00:15:01.004939   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-816061","namespace":"kube-system","uid":"ea192806-6f42-4471-8e73-ae96aa3bfa06","resourceVersion":"889","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45d88e9a1c94ef1043c5c8795b51d51f","kubernetes.io/config.mirror":"45d88e9a1c94ef1043c5c8795b51d51f","kubernetes.io/config.seen":"2023-09-07T00:04:04.251717776Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0907 00:15:01.005312   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:15:01.005327   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:01.005334   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:01.005343   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:01.007435   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:15:01.007452   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:01.007461   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:01.007471   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:01.007483   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:01.007493   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:01 GMT
	I0907 00:15:01.007508   29917 round_trippers.go:580]     Audit-Id: 88b9eb4f-1ea7-49cc-9bb7-4496620dc193
	I0907 00:15:01.007517   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:01.007813   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:15:01.008169   29917 pod_ready.go:92] pod "kube-controller-manager-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:15:01.008183   29917 pod_ready.go:81] duration metric: took 5.521667ms waiting for pod "kube-controller-manager-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:15:01.008196   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wswp" in "kube-system" namespace to be "Ready" ...
	I0907 00:15:01.008250   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wswp
	I0907 00:15:01.008259   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:01.008270   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:01.008284   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:01.010176   29917 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:15:01.010195   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:01.010203   29917 round_trippers.go:580]     Audit-Id: 72312302-bfec-42f5-81d4-922fcadc606f
	I0907 00:15:01.010211   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:01.010218   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:01.010227   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:01.010235   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:01.010243   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:01 GMT
	I0907 00:15:01.010487   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2wswp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4d99412b-fc2d-4fce-a7e2-80da3e220e07","resourceVersion":"522","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0907 00:15:01.187244   29917 request.go:629] Waited for 176.420376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:15:01.187296   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:15:01.187300   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:01.187316   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:01.187324   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:01.190169   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:15:01.190189   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:01.190196   29917 round_trippers.go:580]     Audit-Id: 68fc7ce0-fe28-4d3d-8eec-060e11c7150e
	I0907 00:15:01.190202   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:01.190208   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:01.190213   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:01.190218   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:01.190224   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:01 GMT
	I0907 00:15:01.190639   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"20b50f58-79b7-44b5-afb8-797975c71f82","resourceVersion":"754","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I0907 00:15:01.191081   29917 pod_ready.go:92] pod "kube-proxy-2wswp" in "kube-system" namespace has status "Ready":"True"
	I0907 00:15:01.191100   29917 pod_ready.go:81] duration metric: took 182.896186ms waiting for pod "kube-proxy-2wswp" in "kube-system" namespace to be "Ready" ...
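The "Waited for ... due to client-side throttling" message a few lines above comes from client-go's own rate limiter (the log itself notes it is "not priority and fairness"): a burst of back-to-back GETs exceeds the default client-side limits and later requests are queued briefly. As a hedged sketch, a caller could raise those limits on its rest.Config before building the clientset; the helper name and values here are made up for illustration, while QPS 5 / Burst 10 are client-go's documented defaults.

    package kubeutil

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newLessThrottledClient raises client-go's default client-side rate
    // limits (QPS 5, Burst 10) so short bursts of requests, like the pod and
    // node GETs above, are less likely to be queued with throttling waits.
    func newLessThrottledClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }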
	I0907 00:15:01.191111   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dlt4x" in "kube-system" namespace to be "Ready" ...
	I0907 00:15:01.386543   29917 request.go:629] Waited for 195.364941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlt4x
	I0907 00:15:01.386603   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlt4x
	I0907 00:15:01.386608   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:01.386618   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:01.386624   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:01.389462   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:15:01.389488   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:01.389499   29917 round_trippers.go:580]     Audit-Id: 21c78383-8f70-4d08-81eb-bb38500f281c
	I0907 00:15:01.389507   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:01.389516   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:01.389525   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:01.389533   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:01.389541   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:01 GMT
	I0907 00:15:01.389706   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dlt4x","generateName":"kube-proxy-","namespace":"kube-system","uid":"2c56690f-de33-49ec-8cad-79fdae731daa","resourceVersion":"735","creationTimestamp":"2023-09-07T00:06:03Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:06:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0907 00:15:01.586458   29917 request.go:629] Waited for 196.324919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:15:01.586548   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:15:01.586556   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:01.586563   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:01.586569   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:01.589869   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:15:01.589894   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:01.589904   29917 round_trippers.go:580]     Audit-Id: a632e43f-2fde-4fa9-9e7d-b0689e05cb18
	I0907 00:15:01.589913   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:01.589923   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:01.589930   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:01.589935   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:01.589941   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:01 GMT
	I0907 00:15:01.590015   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m03","uid":"92bc42a5-722e-482e-9f35-19fa4d9a6485","resourceVersion":"903","creationTimestamp":"2023-09-07T00:06:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0907 00:15:01.590274   29917 pod_ready.go:92] pod "kube-proxy-dlt4x" in "kube-system" namespace has status "Ready":"True"
	I0907 00:15:01.590288   29917 pod_ready.go:81] duration metric: took 399.163192ms waiting for pod "kube-proxy-dlt4x" in "kube-system" namespace to be "Ready" ...
	I0907 00:15:01.590297   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbzlv" in "kube-system" namespace to be "Ready" ...
	I0907 00:15:01.786819   29917 request.go:629] Waited for 196.43623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbzlv
	I0907 00:15:01.786873   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbzlv
	I0907 00:15:01.786878   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:01.786886   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:01.786892   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:01.789519   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:15:01.789539   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:01.789546   29917 round_trippers.go:580]     Audit-Id: 850512e7-e32a-4919-afec-195ca4874f0e
	I0907 00:15:01.789552   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:01.789559   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:01.789568   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:01.789578   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:01.789590   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:01 GMT
	I0907 00:15:01.790003   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbzlv","generateName":"kube-proxy-","namespace":"kube-system","uid":"6b9717d8-174b-4713-a941-382c81cc659e","resourceVersion":"846","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0907 00:15:01.986856   29917 request.go:629] Waited for 196.453698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:15:01.986928   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:15:01.986933   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:01.986940   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:01.986956   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:01.990009   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:15:01.990028   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:01.990035   29917 round_trippers.go:580]     Audit-Id: 5ed1f559-8e9f-4c5f-bc5e-e81b36d7efbe
	I0907 00:15:01.990041   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:01.990046   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:01.990051   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:01.990057   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:01.990062   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:01 GMT
	I0907 00:15:01.990608   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:15:01.990926   29917 pod_ready.go:92] pod "kube-proxy-tbzlv" in "kube-system" namespace has status "Ready":"True"
	I0907 00:15:01.990939   29917 pod_ready.go:81] duration metric: took 400.637819ms waiting for pod "kube-proxy-tbzlv" in "kube-system" namespace to be "Ready" ...
	I0907 00:15:01.990949   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:15:02.186316   29917 request.go:629] Waited for 195.300616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-816061
	I0907 00:15:02.186369   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-816061
	I0907 00:15:02.186374   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:02.186382   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:02.186388   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:02.189249   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:15:02.189266   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:02.189273   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:02.189284   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:02.189292   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:02 GMT
	I0907 00:15:02.189303   29917 round_trippers.go:580]     Audit-Id: 018b4cca-48f9-4454-ab56-0327eaf219d6
	I0907 00:15:02.189315   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:02.189327   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:02.189531   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-816061","namespace":"kube-system","uid":"3fa4fad1-c309-42a9-af5f-28e6398492c7","resourceVersion":"881","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ac3fb26098ffac0d0e40ebb845f9b9fe","kubernetes.io/config.mirror":"ac3fb26098ffac0d0e40ebb845f9b9fe","kubernetes.io/config.seen":"2023-09-07T00:04:04.251718754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0907 00:15:02.387218   29917 request.go:629] Waited for 197.342393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:15:02.387284   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:15:02.387289   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:02.387296   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:02.387306   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:02.389907   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:15:02.389934   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:02.389942   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:02 GMT
	I0907 00:15:02.389948   29917 round_trippers.go:580]     Audit-Id: 4a067f85-89db-4e89-b02c-379f761920bd
	I0907 00:15:02.389953   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:02.389959   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:02.389965   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:02.389973   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:02.390162   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0907 00:15:02.390450   29917 pod_ready.go:92] pod "kube-scheduler-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:15:02.390463   29917 pod_ready.go:81] duration metric: took 399.503634ms waiting for pod "kube-scheduler-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:15:02.390472   29917 pod_ready.go:38] duration metric: took 11.000232534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:15:02.390487   29917 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:15:02.390528   29917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:15:02.404396   29917 command_runner.go:130] > 1098
	I0907 00:15:02.404426   29917 api_server.go:72] duration metric: took 11.38694907s to wait for apiserver process to appear ...
	I0907 00:15:02.404434   29917 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:15:02.404448   29917 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0907 00:15:02.409514   29917 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I0907 00:15:02.409580   29917 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I0907 00:15:02.409596   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:02.409632   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:02.409659   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:02.411002   29917 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:15:02.411029   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:02.411039   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:02 GMT
	I0907 00:15:02.411064   29917 round_trippers.go:580]     Audit-Id: 64665a22-6171-4abf-8bec-704f5f616c30
	I0907 00:15:02.411078   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:02.411090   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:02.411097   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:02.411105   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:02.411112   29917 round_trippers.go:580]     Content-Length: 263
	I0907 00:15:02.411154   29917 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0907 00:15:02.411192   29917 api_server.go:141] control plane version: v1.28.1
	I0907 00:15:02.411204   29917 api_server.go:131] duration metric: took 6.766126ms to wait for apiserver health ...
	I0907 00:15:02.411212   29917 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:15:02.586667   29917 request.go:629] Waited for 175.373883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:15:02.586739   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:15:02.586746   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:02.586757   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:02.586764   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:02.591181   29917 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:15:02.591202   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:02.591209   29917 round_trippers.go:580]     Audit-Id: 6b517f00-0a5a-4336-b65a-cbef8c50e03e
	I0907 00:15:02.591215   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:02.591220   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:02.591225   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:02.591231   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:02.591236   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:02 GMT
	I0907 00:15:02.593032   29917 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"910"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"887","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81881 chars]
	I0907 00:15:02.595545   29917 system_pods.go:59] 12 kube-system pods found
	I0907 00:15:02.595566   29917 system_pods.go:61] "coredns-5dd5756b68-8ktxh" [c2574ba0-f19a-40c1-a06f-601bb17661f6] Running
	I0907 00:15:02.595572   29917 system_pods.go:61] "etcd-multinode-816061" [7ff498e1-17ed-4818-befa-68a5a69b96d4] Running
	I0907 00:15:02.595579   29917 system_pods.go:61] "kindnet-9qj9n" [d137582e-041a-4af3-b93e-47e965a488c5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0907 00:15:02.595586   29917 system_pods.go:61] "kindnet-gdck2" [d6762e3d-d971-416f-a45f-bc08ebcfb175] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0907 00:15:02.595593   29917 system_pods.go:61] "kindnet-xgbtc" [137c032b-12d1-4179-8416-0f3cc5733842] Running
	I0907 00:15:02.595601   29917 system_pods.go:61] "kube-apiserver-multinode-816061" [dbbbc2db-98c3-44e3-a18d-947bad7ffda2] Running
	I0907 00:15:02.595608   29917 system_pods.go:61] "kube-controller-manager-multinode-816061" [ea192806-6f42-4471-8e73-ae96aa3bfa06] Running
	I0907 00:15:02.595624   29917 system_pods.go:61] "kube-proxy-2wswp" [4d99412b-fc2d-4fce-a7e2-80da3e220e07] Running
	I0907 00:15:02.595635   29917 system_pods.go:61] "kube-proxy-dlt4x" [2c56690f-de33-49ec-8cad-79fdae731daa] Running
	I0907 00:15:02.595640   29917 system_pods.go:61] "kube-proxy-tbzlv" [6b9717d8-174b-4713-a941-382c81cc659e] Running
	I0907 00:15:02.595648   29917 system_pods.go:61] "kube-scheduler-multinode-816061" [3fa4fad1-c309-42a9-af5f-28e6398492c7] Running
	I0907 00:15:02.595652   29917 system_pods.go:61] "storage-provisioner" [3ce467f7-aaa1-4391-9bc9-39ef0521ebd2] Running
	I0907 00:15:02.595657   29917 system_pods.go:74] duration metric: took 184.441378ms to wait for pod list to return data ...
	I0907 00:15:02.595666   29917 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:15:02.787178   29917 request.go:629] Waited for 191.429711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I0907 00:15:02.787258   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I0907 00:15:02.787264   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:02.787277   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:02.787289   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:02.792537   29917 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0907 00:15:02.792561   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:02.792569   29917 round_trippers.go:580]     Content-Length: 261
	I0907 00:15:02.792575   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:02 GMT
	I0907 00:15:02.792581   29917 round_trippers.go:580]     Audit-Id: 8058d236-1426-4ce7-902d-b59aa41ef8d7
	I0907 00:15:02.792589   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:02.792598   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:02.792608   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:02.792618   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:02.792656   29917 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"910"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"41407859-a71f-4f4f-b9db-b147bd408b48","resourceVersion":"353","creationTimestamp":"2023-09-07T00:04:17Z"}}]}
	I0907 00:15:02.792859   29917 default_sa.go:45] found service account: "default"
	I0907 00:15:02.792876   29917 default_sa.go:55] duration metric: took 197.203607ms for default service account to be created ...
	I0907 00:15:02.792885   29917 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:15:02.986236   29917 request.go:629] Waited for 193.288248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:15:02.986304   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:15:02.986309   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:02.986322   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:02.986345   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:02.990277   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:15:02.990302   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:02.990314   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:02.990322   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:02.990328   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:02.990333   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:02 GMT
	I0907 00:15:02.990340   29917 round_trippers.go:580]     Audit-Id: 52efd5ab-3de5-4702-8960-f81f243e1e08
	I0907 00:15:02.990348   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:02.991986   29917 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"910"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"887","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81881 chars]
	I0907 00:15:02.995554   29917 system_pods.go:86] 12 kube-system pods found
	I0907 00:15:02.995578   29917 system_pods.go:89] "coredns-5dd5756b68-8ktxh" [c2574ba0-f19a-40c1-a06f-601bb17661f6] Running
	I0907 00:15:02.995592   29917 system_pods.go:89] "etcd-multinode-816061" [7ff498e1-17ed-4818-befa-68a5a69b96d4] Running
	I0907 00:15:02.995603   29917 system_pods.go:89] "kindnet-9qj9n" [d137582e-041a-4af3-b93e-47e965a488c5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0907 00:15:02.995619   29917 system_pods.go:89] "kindnet-gdck2" [d6762e3d-d971-416f-a45f-bc08ebcfb175] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0907 00:15:02.995625   29917 system_pods.go:89] "kindnet-xgbtc" [137c032b-12d1-4179-8416-0f3cc5733842] Running
	I0907 00:15:02.995633   29917 system_pods.go:89] "kube-apiserver-multinode-816061" [dbbbc2db-98c3-44e3-a18d-947bad7ffda2] Running
	I0907 00:15:02.995644   29917 system_pods.go:89] "kube-controller-manager-multinode-816061" [ea192806-6f42-4471-8e73-ae96aa3bfa06] Running
	I0907 00:15:02.995655   29917 system_pods.go:89] "kube-proxy-2wswp" [4d99412b-fc2d-4fce-a7e2-80da3e220e07] Running
	I0907 00:15:02.995665   29917 system_pods.go:89] "kube-proxy-dlt4x" [2c56690f-de33-49ec-8cad-79fdae731daa] Running
	I0907 00:15:02.995675   29917 system_pods.go:89] "kube-proxy-tbzlv" [6b9717d8-174b-4713-a941-382c81cc659e] Running
	I0907 00:15:02.995684   29917 system_pods.go:89] "kube-scheduler-multinode-816061" [3fa4fad1-c309-42a9-af5f-28e6398492c7] Running
	I0907 00:15:02.995694   29917 system_pods.go:89] "storage-provisioner" [3ce467f7-aaa1-4391-9bc9-39ef0521ebd2] Running
	I0907 00:15:02.995705   29917 system_pods.go:126] duration metric: took 202.814259ms to wait for k8s-apps to be running ...
	I0907 00:15:02.995714   29917 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:15:02.995764   29917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:15:03.009966   29917 system_svc.go:56] duration metric: took 14.244404ms WaitForService to wait for kubelet.
	I0907 00:15:03.009991   29917 kubeadm.go:581] duration metric: took 11.992511914s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:15:03.010016   29917 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:15:03.186357   29917 request.go:629] Waited for 176.281463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I0907 00:15:03.186424   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I0907 00:15:03.186430   29917 round_trippers.go:469] Request Headers:
	I0907 00:15:03.186440   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:15:03.186448   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:15:03.189162   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:15:03.189188   29917 round_trippers.go:577] Response Headers:
	I0907 00:15:03.189196   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:15:03.189202   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:15:03 GMT
	I0907 00:15:03.189209   29917 round_trippers.go:580]     Audit-Id: c7c2ddf6-4c45-49a5-8737-2a284b042122
	I0907 00:15:03.189219   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:15:03.189224   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:15:03.189230   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:15:03.189588   29917 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"910"},"items":[{"metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"877","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15075 chars]
	I0907 00:15:03.190367   29917 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:15:03.190388   29917 node_conditions.go:123] node cpu capacity is 2
	I0907 00:15:03.190400   29917 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:15:03.190407   29917 node_conditions.go:123] node cpu capacity is 2
	I0907 00:15:03.190414   29917 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:15:03.190421   29917 node_conditions.go:123] node cpu capacity is 2
	I0907 00:15:03.190425   29917 node_conditions.go:105] duration metric: took 180.405521ms to run NodePressure ...
	I0907 00:15:03.190438   29917 start.go:228] waiting for startup goroutines ...
	I0907 00:15:03.190452   29917 start.go:233] waiting for cluster config update ...
	I0907 00:15:03.190462   29917 start.go:242] writing updated cluster config ...
	I0907 00:15:03.191096   29917 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:15:03.191223   29917 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/config.json ...
	I0907 00:15:03.193657   29917 out.go:177] * Starting worker node multinode-816061-m02 in cluster multinode-816061
	I0907 00:15:03.195137   29917 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:15:03.195158   29917 cache.go:57] Caching tarball of preloaded images
	I0907 00:15:03.195254   29917 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:15:03.195265   29917 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 00:15:03.195360   29917 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/config.json ...
	I0907 00:15:03.195518   29917 start.go:365] acquiring machines lock for multinode-816061-m02: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:15:03.195558   29917 start.go:369] acquired machines lock for "multinode-816061-m02" in 21.403µs
	I0907 00:15:03.195570   29917 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:15:03.195575   29917 fix.go:54] fixHost starting: m02
	I0907 00:15:03.195819   29917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:15:03.195846   29917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:15:03.210252   29917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35839
	I0907 00:15:03.210660   29917 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:15:03.211180   29917 main.go:141] libmachine: Using API Version  1
	I0907 00:15:03.211202   29917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:15:03.211560   29917 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:15:03.211736   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:15:03.211910   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetState
	I0907 00:15:03.213460   29917 fix.go:102] recreateIfNeeded on multinode-816061-m02: state=Running err=<nil>
	W0907 00:15:03.213475   29917 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:15:03.215523   29917 out.go:177] * Updating the running kvm2 "multinode-816061-m02" VM ...
	I0907 00:15:03.217049   29917 machine.go:88] provisioning docker machine ...
	I0907 00:15:03.217068   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:15:03.217265   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetMachineName
	I0907 00:15:03.217421   29917 buildroot.go:166] provisioning hostname "multinode-816061-m02"
	I0907 00:15:03.217450   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetMachineName
	I0907 00:15:03.217660   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:15:03.219852   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:15:03.220231   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:15:03.220258   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:15:03.220382   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:15:03.220543   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:15:03.220699   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:15:03.220842   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:15:03.221003   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:15:03.221379   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0907 00:15:03.221392   29917 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-816061-m02 && echo "multinode-816061-m02" | sudo tee /etc/hostname
	I0907 00:15:03.353493   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-816061-m02
	
	I0907 00:15:03.353523   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:15:03.356349   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:15:03.356779   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:15:03.356820   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:15:03.356950   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:15:03.357124   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:15:03.357341   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:15:03.357519   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:15:03.357702   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:15:03.358319   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0907 00:15:03.358347   29917 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-816061-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-816061-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-816061-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:15:03.471761   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:15:03.471791   29917 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:15:03.471810   29917 buildroot.go:174] setting up certificates
	I0907 00:15:03.471820   29917 provision.go:83] configureAuth start
	I0907 00:15:03.471830   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetMachineName
	I0907 00:15:03.472108   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetIP
	I0907 00:15:03.474727   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:15:03.475111   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:15:03.475142   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:15:03.475304   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:15:03.477571   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:15:03.477957   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:15:03.477997   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:15:03.478066   29917 provision.go:138] copyHostCerts
	I0907 00:15:03.478090   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:15:03.478115   29917 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:15:03.478124   29917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:15:03.478186   29917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:15:03.478257   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:15:03.478275   29917 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:15:03.478278   29917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:15:03.478299   29917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:15:03.478345   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:15:03.478360   29917 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:15:03.478366   29917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:15:03.478384   29917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:15:03.478427   29917 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.multinode-816061-m02 san=[192.168.39.44 192.168.39.44 localhost 127.0.0.1 minikube multinode-816061-m02]
	I0907 00:15:03.709177   29917 provision.go:172] copyRemoteCerts
	I0907 00:15:03.709227   29917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:15:03.709248   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:15:03.712032   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:15:03.712405   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:15:03.712449   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:15:03.712616   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:15:03.712819   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:15:03.712976   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:15:03.713081   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/id_rsa Username:docker}
	I0907 00:15:03.800542   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0907 00:15:03.800605   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:15:03.825845   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0907 00:15:03.825915   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0907 00:15:03.852195   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0907 00:15:03.852269   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:15:03.879710   29917 provision.go:86] duration metric: configureAuth took 407.878448ms
	I0907 00:15:03.879735   29917 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:15:03.879942   29917 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:15:03.880048   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:15:03.882814   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:15:03.883176   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:15:03.883215   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:15:03.883395   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:15:03.883621   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:15:03.883790   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:15:03.883910   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:15:03.884065   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:15:03.884458   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0907 00:15:03.884482   29917 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:16:34.327107   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:16:34.327137   29917 machine.go:91] provisioned docker machine in 1m31.110073184s
	I0907 00:16:34.327151   29917 start.go:300] post-start starting for "multinode-816061-m02" (driver="kvm2")
	I0907 00:16:34.327164   29917 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:16:34.327191   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:16:34.327547   29917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:16:34.327596   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:16:34.330710   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:16:34.331159   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:16:34.331183   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:16:34.331325   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:16:34.331527   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:16:34.331681   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:16:34.331811   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/id_rsa Username:docker}
	I0907 00:16:34.422167   29917 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:16:34.426459   29917 command_runner.go:130] > NAME=Buildroot
	I0907 00:16:34.426480   29917 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0907 00:16:34.426487   29917 command_runner.go:130] > ID=buildroot
	I0907 00:16:34.426494   29917 command_runner.go:130] > VERSION_ID=2021.02.12
	I0907 00:16:34.426501   29917 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0907 00:16:34.426579   29917 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:16:34.426609   29917 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:16:34.426699   29917 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:16:34.426767   29917 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:16:34.426791   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> /etc/ssl/certs/136572.pem
	I0907 00:16:34.426889   29917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:16:34.435176   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:16:34.459242   29917 start.go:303] post-start completed in 132.076358ms
	I0907 00:16:34.459265   29917 fix.go:56] fixHost completed within 1m31.263688954s
	I0907 00:16:34.459288   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:16:34.461581   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:16:34.461856   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:16:34.461892   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:16:34.462063   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:16:34.462253   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:16:34.462384   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:16:34.462474   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:16:34.462588   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:16:34.463040   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0907 00:16:34.463055   29917 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:16:34.579690   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694045794.573385245
	
	I0907 00:16:34.579714   29917 fix.go:206] guest clock: 1694045794.573385245
	I0907 00:16:34.579724   29917 fix.go:219] Guest: 2023-09-07 00:16:34.573385245 +0000 UTC Remote: 2023-09-07 00:16:34.459269274 +0000 UTC m=+452.276650680 (delta=114.115971ms)
	I0907 00:16:34.579743   29917 fix.go:190] guest clock delta is within tolerance: 114.115971ms
	I0907 00:16:34.579749   29917 start.go:83] releasing machines lock for "multinode-816061-m02", held for 1m31.384182389s
	I0907 00:16:34.579772   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:16:34.580026   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetIP
	I0907 00:16:34.582580   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:16:34.582916   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:16:34.582963   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:16:34.584953   29917 out.go:177] * Found network options:
	I0907 00:16:34.586515   29917 out.go:177]   - NO_PROXY=192.168.39.212
	W0907 00:16:34.587965   29917 proxy.go:119] fail to check proxy env: Error ip not in block
	I0907 00:16:34.587992   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:16:34.588495   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:16:34.588709   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:16:34.588798   29917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:16:34.588838   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	W0907 00:16:34.588925   29917 proxy.go:119] fail to check proxy env: Error ip not in block
	I0907 00:16:34.588995   29917 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:16:34.589021   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:16:34.591584   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:16:34.591649   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:16:34.591960   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:16:34.591988   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:16:34.592013   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:16:34.592034   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:16:34.592134   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:16:34.592316   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:16:34.592329   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:16:34.592488   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:16:34.592492   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:16:34.592644   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:16:34.592676   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/id_rsa Username:docker}
	I0907 00:16:34.592773   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/id_rsa Username:docker}
	I0907 00:16:34.825275   29917 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0907 00:16:34.825297   29917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0907 00:16:34.831226   29917 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0907 00:16:34.831398   29917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:16:34.831457   29917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:16:34.839783   29917 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0907 00:16:34.839804   29917 start.go:466] detecting cgroup driver to use...
	I0907 00:16:34.839849   29917 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:16:34.853292   29917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:16:34.865535   29917 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:16:34.865589   29917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:16:34.878907   29917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:16:34.891338   29917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:16:35.021256   29917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:16:35.171545   29917 docker.go:212] disabling docker service ...
	I0907 00:16:35.171625   29917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:16:35.208376   29917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:16:35.246697   29917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:16:35.401415   29917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:16:35.545728   29917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:16:35.559266   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:16:35.577289   29917 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0907 00:16:35.577327   29917 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:16:35.577379   29917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:16:35.588110   29917 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:16:35.588178   29917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:16:35.598609   29917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:16:35.609181   29917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:16:35.619658   29917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:16:35.638583   29917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:16:35.648703   29917 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0907 00:16:35.648759   29917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:16:35.658566   29917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:16:35.780382   29917 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:16:38.402065   29917 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.621630239s)
	I0907 00:16:38.402096   29917 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:16:38.402149   29917 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:16:38.407534   29917 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0907 00:16:38.407561   29917 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0907 00:16:38.407573   29917 command_runner.go:130] > Device: 16h/22d	Inode: 1228        Links: 1
	I0907 00:16:38.407584   29917 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0907 00:16:38.407594   29917 command_runner.go:130] > Access: 2023-09-07 00:16:38.307246047 +0000
	I0907 00:16:38.407603   29917 command_runner.go:130] > Modify: 2023-09-07 00:16:38.307246047 +0000
	I0907 00:16:38.407616   29917 command_runner.go:130] > Change: 2023-09-07 00:16:38.307246047 +0000
	I0907 00:16:38.407622   29917 command_runner.go:130] >  Birth: -
	I0907 00:16:38.407939   29917 start.go:534] Will wait 60s for crictl version
	I0907 00:16:38.407994   29917 ssh_runner.go:195] Run: which crictl
	I0907 00:16:38.411544   29917 command_runner.go:130] > /usr/bin/crictl
	I0907 00:16:38.411654   29917 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:16:38.446679   29917 command_runner.go:130] > Version:  0.1.0
	I0907 00:16:38.446704   29917 command_runner.go:130] > RuntimeName:  cri-o
	I0907 00:16:38.446711   29917 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0907 00:16:38.446720   29917 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0907 00:16:38.446884   29917 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:16:38.446955   29917 ssh_runner.go:195] Run: crio --version
	I0907 00:16:38.496856   29917 command_runner.go:130] > crio version 1.24.1
	I0907 00:16:38.496881   29917 command_runner.go:130] > Version:          1.24.1
	I0907 00:16:38.496891   29917 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0907 00:16:38.496897   29917 command_runner.go:130] > GitTreeState:     dirty
	I0907 00:16:38.496917   29917 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0907 00:16:38.496925   29917 command_runner.go:130] > GoVersion:        go1.19.9
	I0907 00:16:38.496931   29917 command_runner.go:130] > Compiler:         gc
	I0907 00:16:38.496939   29917 command_runner.go:130] > Platform:         linux/amd64
	I0907 00:16:38.496949   29917 command_runner.go:130] > Linkmode:         dynamic
	I0907 00:16:38.496959   29917 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0907 00:16:38.496964   29917 command_runner.go:130] > SeccompEnabled:   true
	I0907 00:16:38.496971   29917 command_runner.go:130] > AppArmorEnabled:  false
	I0907 00:16:38.497049   29917 ssh_runner.go:195] Run: crio --version
	I0907 00:16:38.541682   29917 command_runner.go:130] > crio version 1.24.1
	I0907 00:16:38.541706   29917 command_runner.go:130] > Version:          1.24.1
	I0907 00:16:38.541717   29917 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0907 00:16:38.541724   29917 command_runner.go:130] > GitTreeState:     dirty
	I0907 00:16:38.541733   29917 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0907 00:16:38.541740   29917 command_runner.go:130] > GoVersion:        go1.19.9
	I0907 00:16:38.541747   29917 command_runner.go:130] > Compiler:         gc
	I0907 00:16:38.541754   29917 command_runner.go:130] > Platform:         linux/amd64
	I0907 00:16:38.541760   29917 command_runner.go:130] > Linkmode:         dynamic
	I0907 00:16:38.541770   29917 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0907 00:16:38.541776   29917 command_runner.go:130] > SeccompEnabled:   true
	I0907 00:16:38.541782   29917 command_runner.go:130] > AppArmorEnabled:  false
	I0907 00:16:38.545122   29917 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:16:38.546564   29917 out.go:177]   - env NO_PROXY=192.168.39.212
	I0907 00:16:38.547967   29917 main.go:141] libmachine: (multinode-816061-m02) Calling .GetIP
	I0907 00:16:38.550540   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:16:38.550974   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:16:38.551003   29917 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:16:38.551197   29917 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0907 00:16:38.555459   29917 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0907 00:16:38.555790   29917 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061 for IP: 192.168.39.44
	I0907 00:16:38.555826   29917 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:16:38.555992   29917 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:16:38.556029   29917 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:16:38.556041   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0907 00:16:38.556055   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0907 00:16:38.556067   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0907 00:16:38.556078   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0907 00:16:38.556145   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:16:38.556173   29917 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:16:38.556183   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:16:38.556210   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:16:38.556235   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:16:38.556259   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:16:38.556296   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:16:38.556320   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem -> /usr/share/ca-certificates/13657.pem
	I0907 00:16:38.556332   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> /usr/share/ca-certificates/136572.pem
	I0907 00:16:38.556346   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:16:38.556711   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:16:38.580960   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:16:38.603715   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:16:38.626302   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:16:38.648541   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:16:38.670913   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:16:38.693480   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:16:38.717357   29917 ssh_runner.go:195] Run: openssl version
	I0907 00:16:38.723636   29917 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0907 00:16:38.723710   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:16:38.733917   29917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:16:38.740968   29917 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:16:38.741170   29917 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:16:38.741215   29917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:16:38.746607   29917 command_runner.go:130] > 3ec20f2e
	I0907 00:16:38.746803   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:16:38.754918   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:16:38.766857   29917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:16:38.771202   29917 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:16:38.771455   29917 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:16:38.771501   29917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:16:38.777005   29917 command_runner.go:130] > b5213941
	I0907 00:16:38.777061   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:16:38.785067   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:16:38.794434   29917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:16:38.798812   29917 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:16:38.798834   29917 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:16:38.798859   29917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:16:38.804351   29917 command_runner.go:130] > 51391683
	I0907 00:16:38.804405   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:16:38.812097   29917 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:16:38.816091   29917 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0907 00:16:38.816276   29917 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0907 00:16:38.816343   29917 ssh_runner.go:195] Run: crio config
	I0907 00:16:38.869667   29917 command_runner.go:130] ! time="2023-09-07 00:16:38.863424790Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0907 00:16:38.869698   29917 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0907 00:16:38.875547   29917 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0907 00:16:38.875571   29917 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0907 00:16:38.875581   29917 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0907 00:16:38.875587   29917 command_runner.go:130] > #
	I0907 00:16:38.875601   29917 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0907 00:16:38.875610   29917 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0907 00:16:38.875623   29917 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0907 00:16:38.875640   29917 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0907 00:16:38.875649   29917 command_runner.go:130] > # reload'.
	I0907 00:16:38.875659   29917 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0907 00:16:38.875672   29917 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0907 00:16:38.875686   29917 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0907 00:16:38.875698   29917 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0907 00:16:38.875707   29917 command_runner.go:130] > [crio]
	I0907 00:16:38.875716   29917 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0907 00:16:38.875727   29917 command_runner.go:130] > # containers images, in this directory.
	I0907 00:16:38.875738   29917 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0907 00:16:38.875760   29917 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0907 00:16:38.875772   29917 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0907 00:16:38.875782   29917 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0907 00:16:38.875795   29917 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0907 00:16:38.875805   29917 command_runner.go:130] > storage_driver = "overlay"
	I0907 00:16:38.875817   29917 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0907 00:16:38.875829   29917 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0907 00:16:38.875839   29917 command_runner.go:130] > storage_option = [
	I0907 00:16:38.875847   29917 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0907 00:16:38.875856   29917 command_runner.go:130] > ]
	I0907 00:16:38.875866   29917 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0907 00:16:38.875879   29917 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0907 00:16:38.875889   29917 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0907 00:16:38.875901   29917 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0907 00:16:38.875914   29917 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0907 00:16:38.875924   29917 command_runner.go:130] > # always happen on a node reboot
	I0907 00:16:38.875938   29917 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0907 00:16:38.875953   29917 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0907 00:16:38.875967   29917 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0907 00:16:38.875984   29917 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0907 00:16:38.875995   29917 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0907 00:16:38.876008   29917 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0907 00:16:38.876024   29917 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0907 00:16:38.876033   29917 command_runner.go:130] > # internal_wipe = true
	I0907 00:16:38.876043   29917 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0907 00:16:38.876055   29917 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0907 00:16:38.876067   29917 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0907 00:16:38.876078   29917 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0907 00:16:38.876090   29917 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0907 00:16:38.876096   29917 command_runner.go:130] > [crio.api]
	I0907 00:16:38.876108   29917 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0907 00:16:38.876115   29917 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0907 00:16:38.876124   29917 command_runner.go:130] > # IP address on which the stream server will listen.
	I0907 00:16:38.876134   29917 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0907 00:16:38.876143   29917 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0907 00:16:38.876151   29917 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0907 00:16:38.876161   29917 command_runner.go:130] > # stream_port = "0"
	I0907 00:16:38.876170   29917 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0907 00:16:38.876180   29917 command_runner.go:130] > # stream_enable_tls = false
	I0907 00:16:38.876190   29917 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0907 00:16:38.876200   29917 command_runner.go:130] > # stream_idle_timeout = ""
	I0907 00:16:38.876210   29917 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0907 00:16:38.876223   29917 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0907 00:16:38.876231   29917 command_runner.go:130] > # minutes.
	I0907 00:16:38.876238   29917 command_runner.go:130] > # stream_tls_cert = ""
	I0907 00:16:38.876251   29917 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0907 00:16:38.876264   29917 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0907 00:16:38.876273   29917 command_runner.go:130] > # stream_tls_key = ""
	I0907 00:16:38.876283   29917 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0907 00:16:38.876296   29917 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0907 00:16:38.876308   29917 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0907 00:16:38.876315   29917 command_runner.go:130] > # stream_tls_ca = ""
	I0907 00:16:38.876330   29917 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0907 00:16:38.876341   29917 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0907 00:16:38.876354   29917 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0907 00:16:38.876363   29917 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0907 00:16:38.876380   29917 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0907 00:16:38.876390   29917 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0907 00:16:38.876394   29917 command_runner.go:130] > [crio.runtime]
	I0907 00:16:38.876402   29917 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0907 00:16:38.876409   29917 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0907 00:16:38.876414   29917 command_runner.go:130] > # "nofile=1024:2048"
	I0907 00:16:38.876422   29917 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0907 00:16:38.876426   29917 command_runner.go:130] > # default_ulimits = [
	I0907 00:16:38.876432   29917 command_runner.go:130] > # ]
	I0907 00:16:38.876439   29917 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0907 00:16:38.876446   29917 command_runner.go:130] > # no_pivot = false
	I0907 00:16:38.876451   29917 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0907 00:16:38.876458   29917 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0907 00:16:38.876464   29917 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0907 00:16:38.876470   29917 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0907 00:16:38.876478   29917 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0907 00:16:38.876485   29917 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0907 00:16:38.876491   29917 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0907 00:16:38.876496   29917 command_runner.go:130] > # Cgroup setting for conmon
	I0907 00:16:38.876502   29917 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0907 00:16:38.876508   29917 command_runner.go:130] > conmon_cgroup = "pod"
	I0907 00:16:38.876514   29917 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0907 00:16:38.876521   29917 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0907 00:16:38.876528   29917 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0907 00:16:38.876534   29917 command_runner.go:130] > conmon_env = [
	I0907 00:16:38.876539   29917 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0907 00:16:38.876545   29917 command_runner.go:130] > ]
	I0907 00:16:38.876550   29917 command_runner.go:130] > # Additional environment variables to set for all the
	I0907 00:16:38.876557   29917 command_runner.go:130] > # containers. These are overridden if set in the
	I0907 00:16:38.876563   29917 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0907 00:16:38.876569   29917 command_runner.go:130] > # default_env = [
	I0907 00:16:38.876572   29917 command_runner.go:130] > # ]
	I0907 00:16:38.876578   29917 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0907 00:16:38.876585   29917 command_runner.go:130] > # selinux = false
	I0907 00:16:38.876591   29917 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0907 00:16:38.876597   29917 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0907 00:16:38.876604   29917 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0907 00:16:38.876608   29917 command_runner.go:130] > # seccomp_profile = ""
	I0907 00:16:38.876614   29917 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0907 00:16:38.876620   29917 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0907 00:16:38.876628   29917 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0907 00:16:38.876632   29917 command_runner.go:130] > # which might increase security.
	I0907 00:16:38.876642   29917 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0907 00:16:38.876652   29917 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0907 00:16:38.876665   29917 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0907 00:16:38.876678   29917 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0907 00:16:38.876691   29917 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0907 00:16:38.876699   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:16:38.876710   29917 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0907 00:16:38.876723   29917 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0907 00:16:38.876733   29917 command_runner.go:130] > # the cgroup blockio controller.
	I0907 00:16:38.876740   29917 command_runner.go:130] > # blockio_config_file = ""
	I0907 00:16:38.876753   29917 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0907 00:16:38.876762   29917 command_runner.go:130] > # irqbalance daemon.
	I0907 00:16:38.876770   29917 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0907 00:16:38.876782   29917 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0907 00:16:38.876788   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:16:38.876794   29917 command_runner.go:130] > # rdt_config_file = ""
	I0907 00:16:38.876806   29917 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0907 00:16:38.876814   29917 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0907 00:16:38.876824   29917 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0907 00:16:38.876834   29917 command_runner.go:130] > # separate_pull_cgroup = ""
	I0907 00:16:38.876845   29917 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0907 00:16:38.876857   29917 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0907 00:16:38.876866   29917 command_runner.go:130] > # will be added.
	I0907 00:16:38.876873   29917 command_runner.go:130] > # default_capabilities = [
	I0907 00:16:38.876882   29917 command_runner.go:130] > # 	"CHOWN",
	I0907 00:16:38.876888   29917 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0907 00:16:38.876897   29917 command_runner.go:130] > # 	"FSETID",
	I0907 00:16:38.876905   29917 command_runner.go:130] > # 	"FOWNER",
	I0907 00:16:38.876914   29917 command_runner.go:130] > # 	"SETGID",
	I0907 00:16:38.876920   29917 command_runner.go:130] > # 	"SETUID",
	I0907 00:16:38.876926   29917 command_runner.go:130] > # 	"SETPCAP",
	I0907 00:16:38.876940   29917 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0907 00:16:38.876949   29917 command_runner.go:130] > # 	"KILL",
	I0907 00:16:38.876957   29917 command_runner.go:130] > # ]
	I0907 00:16:38.876967   29917 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0907 00:16:38.876980   29917 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0907 00:16:38.876990   29917 command_runner.go:130] > # default_sysctls = [
	I0907 00:16:38.876998   29917 command_runner.go:130] > # ]
	I0907 00:16:38.877009   29917 command_runner.go:130] > # List of devices on the host that a
	I0907 00:16:38.877022   29917 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0907 00:16:38.877032   29917 command_runner.go:130] > # allowed_devices = [
	I0907 00:16:38.877040   29917 command_runner.go:130] > # 	"/dev/fuse",
	I0907 00:16:38.877049   29917 command_runner.go:130] > # ]
	I0907 00:16:38.877057   29917 command_runner.go:130] > # List of additional devices. specified as
	I0907 00:16:38.877072   29917 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0907 00:16:38.877084   29917 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0907 00:16:38.877114   29917 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0907 00:16:38.877130   29917 command_runner.go:130] > # additional_devices = [
	I0907 00:16:38.877136   29917 command_runner.go:130] > # ]
	I0907 00:16:38.877144   29917 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0907 00:16:38.877154   29917 command_runner.go:130] > # cdi_spec_dirs = [
	I0907 00:16:38.877163   29917 command_runner.go:130] > # 	"/etc/cdi",
	I0907 00:16:38.877172   29917 command_runner.go:130] > # 	"/var/run/cdi",
	I0907 00:16:38.877180   29917 command_runner.go:130] > # ]
	I0907 00:16:38.877193   29917 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0907 00:16:38.877207   29917 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0907 00:16:38.877216   29917 command_runner.go:130] > # Defaults to false.
	I0907 00:16:38.877227   29917 command_runner.go:130] > # device_ownership_from_security_context = false
	I0907 00:16:38.877241   29917 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0907 00:16:38.877252   29917 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0907 00:16:38.877261   29917 command_runner.go:130] > # hooks_dir = [
	I0907 00:16:38.877271   29917 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0907 00:16:38.877280   29917 command_runner.go:130] > # ]
	I0907 00:16:38.877292   29917 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0907 00:16:38.877304   29917 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0907 00:16:38.877314   29917 command_runner.go:130] > # its default mounts from the following two files:
	I0907 00:16:38.877322   29917 command_runner.go:130] > #
	I0907 00:16:38.877331   29917 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0907 00:16:38.877343   29917 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0907 00:16:38.877356   29917 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0907 00:16:38.877364   29917 command_runner.go:130] > #
	I0907 00:16:38.877377   29917 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0907 00:16:38.877390   29917 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0907 00:16:38.877403   29917 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0907 00:16:38.877415   29917 command_runner.go:130] > #      only add mounts it finds in this file.
	I0907 00:16:38.877422   29917 command_runner.go:130] > #
	I0907 00:16:38.877428   29917 command_runner.go:130] > # default_mounts_file = ""
	I0907 00:16:38.877437   29917 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0907 00:16:38.877446   29917 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0907 00:16:38.877452   29917 command_runner.go:130] > pids_limit = 1024
	I0907 00:16:38.877459   29917 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0907 00:16:38.877468   29917 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0907 00:16:38.877477   29917 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0907 00:16:38.877487   29917 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0907 00:16:38.877491   29917 command_runner.go:130] > # log_size_max = -1
	I0907 00:16:38.877500   29917 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0907 00:16:38.877506   29917 command_runner.go:130] > # log_to_journald = false
	I0907 00:16:38.877512   29917 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0907 00:16:38.877519   29917 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0907 00:16:38.877524   29917 command_runner.go:130] > # Path to directory for container attach sockets.
	I0907 00:16:38.877531   29917 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0907 00:16:38.877536   29917 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0907 00:16:38.877543   29917 command_runner.go:130] > # bind_mount_prefix = ""
	I0907 00:16:38.877548   29917 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0907 00:16:38.877555   29917 command_runner.go:130] > # read_only = false
	I0907 00:16:38.877561   29917 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0907 00:16:38.877569   29917 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0907 00:16:38.877574   29917 command_runner.go:130] > # live configuration reload.
	I0907 00:16:38.877580   29917 command_runner.go:130] > # log_level = "info"
	I0907 00:16:38.877587   29917 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0907 00:16:38.877594   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:16:38.877598   29917 command_runner.go:130] > # log_filter = ""
	I0907 00:16:38.877606   29917 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0907 00:16:38.877614   29917 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0907 00:16:38.877620   29917 command_runner.go:130] > # separated by comma.
	I0907 00:16:38.877623   29917 command_runner.go:130] > # uid_mappings = ""
	I0907 00:16:38.877632   29917 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0907 00:16:38.877640   29917 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0907 00:16:38.877646   29917 command_runner.go:130] > # separated by comma.
	I0907 00:16:38.877650   29917 command_runner.go:130] > # gid_mappings = ""
	I0907 00:16:38.877658   29917 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0907 00:16:38.877664   29917 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0907 00:16:38.877672   29917 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0907 00:16:38.877678   29917 command_runner.go:130] > # minimum_mappable_uid = -1
	I0907 00:16:38.877686   29917 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0907 00:16:38.877695   29917 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0907 00:16:38.877703   29917 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0907 00:16:38.877708   29917 command_runner.go:130] > # minimum_mappable_gid = -1
	I0907 00:16:38.877716   29917 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0907 00:16:38.877725   29917 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0907 00:16:38.877733   29917 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0907 00:16:38.877740   29917 command_runner.go:130] > # ctr_stop_timeout = 30
	I0907 00:16:38.877746   29917 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0907 00:16:38.877754   29917 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0907 00:16:38.877759   29917 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0907 00:16:38.877766   29917 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0907 00:16:38.877773   29917 command_runner.go:130] > drop_infra_ctr = false
	I0907 00:16:38.877781   29917 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0907 00:16:38.877789   29917 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0907 00:16:38.877801   29917 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0907 00:16:38.877812   29917 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0907 00:16:38.877825   29917 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0907 00:16:38.877836   29917 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0907 00:16:38.877846   29917 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0907 00:16:38.877857   29917 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0907 00:16:38.877869   29917 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0907 00:16:38.877882   29917 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0907 00:16:38.877895   29917 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0907 00:16:38.877905   29917 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0907 00:16:38.877912   29917 command_runner.go:130] > # default_runtime = "runc"
	I0907 00:16:38.877917   29917 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0907 00:16:38.877927   29917 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0907 00:16:38.877941   29917 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0907 00:16:38.877948   29917 command_runner.go:130] > # creation as a file is not desired either.
	I0907 00:16:38.877957   29917 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0907 00:16:38.877964   29917 command_runner.go:130] > # the hostname is being managed dynamically.
	I0907 00:16:38.877969   29917 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0907 00:16:38.877974   29917 command_runner.go:130] > # ]
	I0907 00:16:38.877981   29917 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0907 00:16:38.877990   29917 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0907 00:16:38.877999   29917 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0907 00:16:38.878007   29917 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0907 00:16:38.878012   29917 command_runner.go:130] > #
	I0907 00:16:38.878017   29917 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0907 00:16:38.878024   29917 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0907 00:16:38.878028   29917 command_runner.go:130] > #  runtime_type = "oci"
	I0907 00:16:38.878034   29917 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0907 00:16:38.878041   29917 command_runner.go:130] > #  privileged_without_host_devices = false
	I0907 00:16:38.878045   29917 command_runner.go:130] > #  allowed_annotations = []
	I0907 00:16:38.878051   29917 command_runner.go:130] > # Where:
	I0907 00:16:38.878057   29917 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0907 00:16:38.878067   29917 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0907 00:16:38.878075   29917 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0907 00:16:38.878083   29917 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0907 00:16:38.878089   29917 command_runner.go:130] > #   in $PATH.
	I0907 00:16:38.878095   29917 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0907 00:16:38.878103   29917 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0907 00:16:38.878112   29917 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0907 00:16:38.878116   29917 command_runner.go:130] > #   state.
	I0907 00:16:38.878125   29917 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0907 00:16:38.878131   29917 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0907 00:16:38.878140   29917 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0907 00:16:38.878148   29917 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0907 00:16:38.878155   29917 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0907 00:16:38.878164   29917 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0907 00:16:38.878171   29917 command_runner.go:130] > #   The currently recognized values are:
	I0907 00:16:38.878177   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0907 00:16:38.878187   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0907 00:16:38.878195   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0907 00:16:38.878202   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0907 00:16:38.878211   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0907 00:16:38.878219   29917 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0907 00:16:38.878228   29917 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0907 00:16:38.878236   29917 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0907 00:16:38.878243   29917 command_runner.go:130] > #   should be moved to the container's cgroup
	I0907 00:16:38.878247   29917 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0907 00:16:38.878254   29917 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0907 00:16:38.878258   29917 command_runner.go:130] > runtime_type = "oci"
	I0907 00:16:38.878264   29917 command_runner.go:130] > runtime_root = "/run/runc"
	I0907 00:16:38.878268   29917 command_runner.go:130] > runtime_config_path = ""
	I0907 00:16:38.878275   29917 command_runner.go:130] > monitor_path = ""
	I0907 00:16:38.878280   29917 command_runner.go:130] > monitor_cgroup = ""
	I0907 00:16:38.878284   29917 command_runner.go:130] > monitor_exec_cgroup = ""
	I0907 00:16:38.878292   29917 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0907 00:16:38.878298   29917 command_runner.go:130] > # running containers
	I0907 00:16:38.878302   29917 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0907 00:16:38.878310   29917 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0907 00:16:38.878336   29917 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0907 00:16:38.878344   29917 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0907 00:16:38.878351   29917 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0907 00:16:38.878355   29917 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0907 00:16:38.878362   29917 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0907 00:16:38.878367   29917 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0907 00:16:38.878374   29917 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0907 00:16:38.878378   29917 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0907 00:16:38.878387   29917 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0907 00:16:38.878394   29917 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0907 00:16:38.878402   29917 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0907 00:16:38.878412   29917 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0907 00:16:38.878421   29917 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0907 00:16:38.878429   29917 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0907 00:16:38.878442   29917 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0907 00:16:38.878452   29917 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0907 00:16:38.878458   29917 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0907 00:16:38.878467   29917 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0907 00:16:38.878473   29917 command_runner.go:130] > # Example:
	I0907 00:16:38.878478   29917 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0907 00:16:38.878485   29917 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0907 00:16:38.878490   29917 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0907 00:16:38.878497   29917 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0907 00:16:38.878501   29917 command_runner.go:130] > # cpuset = 0
	I0907 00:16:38.878505   29917 command_runner.go:130] > # cpushares = "0-1"
	I0907 00:16:38.878509   29917 command_runner.go:130] > # Where:
	I0907 00:16:38.878513   29917 command_runner.go:130] > # The workload name is workload-type.
	I0907 00:16:38.878521   29917 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0907 00:16:38.878527   29917 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0907 00:16:38.878532   29917 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0907 00:16:38.878542   29917 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0907 00:16:38.878547   29917 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0907 00:16:38.878551   29917 command_runner.go:130] > # 
	I0907 00:16:38.878557   29917 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0907 00:16:38.878562   29917 command_runner.go:130] > #
	I0907 00:16:38.878568   29917 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0907 00:16:38.878574   29917 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0907 00:16:38.878582   29917 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0907 00:16:38.878589   29917 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0907 00:16:38.878595   29917 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0907 00:16:38.878599   29917 command_runner.go:130] > [crio.image]
	I0907 00:16:38.878605   29917 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0907 00:16:38.878612   29917 command_runner.go:130] > # default_transport = "docker://"
	I0907 00:16:38.878618   29917 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0907 00:16:38.878626   29917 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0907 00:16:38.878630   29917 command_runner.go:130] > # global_auth_file = ""
	I0907 00:16:38.878636   29917 command_runner.go:130] > # The image used to instantiate infra containers.
	I0907 00:16:38.878643   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:16:38.878648   29917 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0907 00:16:38.878656   29917 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0907 00:16:38.878662   29917 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0907 00:16:38.878669   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:16:38.878673   29917 command_runner.go:130] > # pause_image_auth_file = ""
	I0907 00:16:38.878679   29917 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0907 00:16:38.878685   29917 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0907 00:16:38.878693   29917 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0907 00:16:38.878699   29917 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0907 00:16:38.878705   29917 command_runner.go:130] > # pause_command = "/pause"
	I0907 00:16:38.878711   29917 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0907 00:16:38.878719   29917 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0907 00:16:38.878725   29917 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0907 00:16:38.878733   29917 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0907 00:16:38.878738   29917 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0907 00:16:38.878744   29917 command_runner.go:130] > # signature_policy = ""
	I0907 00:16:38.878750   29917 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0907 00:16:38.878759   29917 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0907 00:16:38.878763   29917 command_runner.go:130] > # changing them here.
	I0907 00:16:38.878769   29917 command_runner.go:130] > # insecure_registries = [
	I0907 00:16:38.878772   29917 command_runner.go:130] > # ]
	I0907 00:16:38.878797   29917 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0907 00:16:38.878809   29917 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0907 00:16:38.878819   29917 command_runner.go:130] > # image_volumes = "mkdir"
	I0907 00:16:38.878827   29917 command_runner.go:130] > # Temporary directory to use for storing big files
	I0907 00:16:38.878837   29917 command_runner.go:130] > # big_files_temporary_dir = ""
	I0907 00:16:38.878846   29917 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0907 00:16:38.878856   29917 command_runner.go:130] > # CNI plugins.
	I0907 00:16:38.878862   29917 command_runner.go:130] > [crio.network]
	I0907 00:16:38.878874   29917 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0907 00:16:38.878885   29917 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0907 00:16:38.878892   29917 command_runner.go:130] > # cni_default_network = ""
	I0907 00:16:38.878898   29917 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0907 00:16:38.878904   29917 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0907 00:16:38.878911   29917 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0907 00:16:38.878917   29917 command_runner.go:130] > # plugin_dirs = [
	I0907 00:16:38.878921   29917 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0907 00:16:38.878925   29917 command_runner.go:130] > # ]
	I0907 00:16:38.878934   29917 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0907 00:16:38.878940   29917 command_runner.go:130] > [crio.metrics]
	I0907 00:16:38.878945   29917 command_runner.go:130] > # Globally enable or disable metrics support.
	I0907 00:16:38.878950   29917 command_runner.go:130] > enable_metrics = true
	I0907 00:16:38.878954   29917 command_runner.go:130] > # Specify enabled metrics collectors.
	I0907 00:16:38.878961   29917 command_runner.go:130] > # Per default all metrics are enabled.
	I0907 00:16:38.878967   29917 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0907 00:16:38.878976   29917 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0907 00:16:38.878982   29917 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0907 00:16:38.878988   29917 command_runner.go:130] > # metrics_collectors = [
	I0907 00:16:38.878992   29917 command_runner.go:130] > # 	"operations",
	I0907 00:16:38.878999   29917 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0907 00:16:38.879004   29917 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0907 00:16:38.879009   29917 command_runner.go:130] > # 	"operations_errors",
	I0907 00:16:38.879013   29917 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0907 00:16:38.879019   29917 command_runner.go:130] > # 	"image_pulls_by_name",
	I0907 00:16:38.879024   29917 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0907 00:16:38.879029   29917 command_runner.go:130] > # 	"image_pulls_failures",
	I0907 00:16:38.879033   29917 command_runner.go:130] > # 	"image_pulls_successes",
	I0907 00:16:38.879039   29917 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0907 00:16:38.879043   29917 command_runner.go:130] > # 	"image_layer_reuse",
	I0907 00:16:38.879050   29917 command_runner.go:130] > # 	"containers_oom_total",
	I0907 00:16:38.879054   29917 command_runner.go:130] > # 	"containers_oom",
	I0907 00:16:38.879060   29917 command_runner.go:130] > # 	"processes_defunct",
	I0907 00:16:38.879064   29917 command_runner.go:130] > # 	"operations_total",
	I0907 00:16:38.879069   29917 command_runner.go:130] > # 	"operations_latency_seconds",
	I0907 00:16:38.879075   29917 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0907 00:16:38.879079   29917 command_runner.go:130] > # 	"operations_errors_total",
	I0907 00:16:38.879086   29917 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0907 00:16:38.879092   29917 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0907 00:16:38.879099   29917 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0907 00:16:38.879103   29917 command_runner.go:130] > # 	"image_pulls_success_total",
	I0907 00:16:38.879110   29917 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0907 00:16:38.879120   29917 command_runner.go:130] > # 	"containers_oom_count_total",
	I0907 00:16:38.879125   29917 command_runner.go:130] > # ]
	I0907 00:16:38.879130   29917 command_runner.go:130] > # The port on which the metrics server will listen.
	I0907 00:16:38.879136   29917 command_runner.go:130] > # metrics_port = 9090
	I0907 00:16:38.879142   29917 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0907 00:16:38.879148   29917 command_runner.go:130] > # metrics_socket = ""
	I0907 00:16:38.879153   29917 command_runner.go:130] > # The certificate for the secure metrics server.
	I0907 00:16:38.879162   29917 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0907 00:16:38.879170   29917 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0907 00:16:38.879175   29917 command_runner.go:130] > # certificate on any modification event.
	I0907 00:16:38.879181   29917 command_runner.go:130] > # metrics_cert = ""
	I0907 00:16:38.879187   29917 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0907 00:16:38.879195   29917 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0907 00:16:38.879199   29917 command_runner.go:130] > # metrics_key = ""
	I0907 00:16:38.879206   29917 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0907 00:16:38.879211   29917 command_runner.go:130] > [crio.tracing]
	I0907 00:16:38.879217   29917 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0907 00:16:38.879223   29917 command_runner.go:130] > # enable_tracing = false
	I0907 00:16:38.879229   29917 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0907 00:16:38.879235   29917 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0907 00:16:38.879241   29917 command_runner.go:130] > # Number of samples to collect per million spans.
	I0907 00:16:38.879248   29917 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0907 00:16:38.879254   29917 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0907 00:16:38.879260   29917 command_runner.go:130] > [crio.stats]
	I0907 00:16:38.879266   29917 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0907 00:16:38.879273   29917 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0907 00:16:38.879280   29917 command_runner.go:130] > # stats_collection_period = 0
	I0907 00:16:38.879334   29917 cni.go:84] Creating CNI manager for ""
	I0907 00:16:38.879343   29917 cni.go:136] 3 nodes found, recommending kindnet
	I0907 00:16:38.879351   29917 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:16:38.879370   29917 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.44 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-816061 NodeName:multinode-816061-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:16:38.879481   29917 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-816061-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:16:38.879525   29917 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-816061-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:16:38.879572   29917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:16:38.888433   29917 command_runner.go:130] > kubeadm
	I0907 00:16:38.888451   29917 command_runner.go:130] > kubectl
	I0907 00:16:38.888455   29917 command_runner.go:130] > kubelet
	I0907 00:16:38.888475   29917 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:16:38.888524   29917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0907 00:16:38.897345   29917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0907 00:16:38.913585   29917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:16:38.929578   29917 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0907 00:16:38.933136   29917 command_runner.go:130] > 192.168.39.212	control-plane.minikube.internal
	I0907 00:16:38.933266   29917 host.go:66] Checking if "multinode-816061" exists ...
	I0907 00:16:38.933503   29917 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:16:38.933625   29917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:16:38.933651   29917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:16:38.948219   29917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38973
	I0907 00:16:38.948633   29917 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:16:38.949141   29917 main.go:141] libmachine: Using API Version  1
	I0907 00:16:38.949158   29917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:16:38.949469   29917 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:16:38.949678   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:16:38.949837   29917 start.go:301] JoinCluster: &{Name:multinode-816061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.153 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:16:38.949986   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0907 00:16:38.950004   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:16:38.952667   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:16:38.953038   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:16:38.953067   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:16:38.953196   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:16:38.953341   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:16:38.953481   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:16:38.953566   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:16:39.142918   29917 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token xy5i4j.l7f20svum8hjqni9 --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
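
	The join command echoed above is produced by running "kubeadm token create --print-join-command --ttl=0" on the control-plane host over SSH. As a rough illustration only (not minikube's own code path), the same step could be scripted in Go with os/exec, assuming kubeadm is on PATH and the caller can sudo; the helper name is made up for this sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// printJoinCommand asks kubeadm on a control-plane host for a join command
	// with a non-expiring token, mirroring the
	// "kubeadm token create --print-join-command --ttl=0" call in the log above.
	func printJoinCommand() (string, error) {
		out, err := exec.Command("sudo", "kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("kubeadm token create: %v: %s", err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		join, err := printJoinCommand()
		if err != nil {
			panic(err)
		}
		fmt.Println(join)
	}
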
	I0907 00:16:39.143000   29917 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0907 00:16:39.143051   29917 host.go:66] Checking if "multinode-816061" exists ...
	I0907 00:16:39.143472   29917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:16:39.143508   29917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:16:39.158088   29917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36209
	I0907 00:16:39.158505   29917 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:16:39.158963   29917 main.go:141] libmachine: Using API Version  1
	I0907 00:16:39.158983   29917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:16:39.159310   29917 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:16:39.159530   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:16:39.159701   29917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-816061-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0907 00:16:39.159729   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:16:39.162615   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:16:39.163039   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:16:39.163072   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:16:39.163224   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:16:39.163382   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:16:39.163553   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:16:39.163705   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:16:39.377279   29917 command_runner.go:130] > node/multinode-816061-m02 cordoned
	I0907 00:16:42.422706   29917 command_runner.go:130] > pod "busybox-5bc68d56bd-mq552" has DeletionTimestamp older than 1 seconds, skipping
	I0907 00:16:42.422728   29917 command_runner.go:130] > node/multinode-816061-m02 drained
	I0907 00:16:42.424933   29917 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0907 00:16:42.424960   29917 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-gdck2, kube-system/kube-proxy-2wswp
	I0907 00:16:42.424987   29917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-816061-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.265258568s)
	I0907 00:16:42.425003   29917 node.go:108] successfully drained node "m02"
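
	The drain above is performed with the bundled kubectl binary and its eviction flags. For reference, the cordon half of that operation is a single update on the Node object; below is a minimal client-go sketch (node name is taken from this log, the kubeconfig path is an assumption, and this is not the code path minikube actually uses):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path; minikube keeps its own copy on the guest.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx := context.Background()
		node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-816061-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Cordon: mark the node unschedulable before its pods are evicted.
		node.Spec.Unschedulable = true
		if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("node cordoned")
	}
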
	I0907 00:16:42.425446   29917 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:16:42.425758   29917 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:16:42.426188   29917 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0907 00:16:42.426243   29917 round_trippers.go:463] DELETE https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:16:42.426249   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:42.426261   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:42.426271   29917 round_trippers.go:473]     Content-Type: application/json
	I0907 00:16:42.426281   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:42.440291   29917 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0907 00:16:42.440314   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:42.440326   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:42.440335   29917 round_trippers.go:580]     Content-Length: 171
	I0907 00:16:42.440342   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:42 GMT
	I0907 00:16:42.440350   29917 round_trippers.go:580]     Audit-Id: c3badd6c-60f8-4f53-b0bd-d3b4321758ac
	I0907 00:16:42.440362   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:42.440370   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:42.440379   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:42.440444   29917 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-816061-m02","kind":"nodes","uid":"20b50f58-79b7-44b5-afb8-797975c71f82"}}
	I0907 00:16:42.440488   29917 node.go:124] successfully deleted node "m02"
	I0907 00:16:42.440502   29917 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
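
	The traced DELETE https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02 request removes the stale Node object before the machine rejoins. Through the typed client-go API the same call looks roughly like the sketch below (the kubeconfig path is an assumption):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// deleteNode mirrors the "DELETE /api/v1/nodes/<name>" request traced above.
	func deleteNode(kubeconfig, name string) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		return cs.CoreV1().Nodes().Delete(context.Background(), name, metav1.DeleteOptions{})
	}

	func main() {
		// Path and node name are taken from the log above; adjust for other profiles.
		if err := deleteNode("/var/lib/minikube/kubeconfig", "multinode-816061-m02"); err != nil {
			panic(err)
		}
	}
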
	I0907 00:16:42.440527   29917 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0907 00:16:42.440548   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xy5i4j.l7f20svum8hjqni9 --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-816061-m02"
	I0907 00:16:42.518722   29917 command_runner.go:130] ! W0907 00:16:42.512244    2788 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0907 00:16:42.518875   29917 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0907 00:16:42.674773   29917 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0907 00:16:42.674819   29917 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0907 00:16:43.463818   29917 command_runner.go:130] > [preflight] Running pre-flight checks
	I0907 00:16:43.463846   29917 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0907 00:16:43.463862   29917 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0907 00:16:43.463881   29917 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:16:43.463893   29917 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:16:43.463901   29917 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0907 00:16:43.463915   29917 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0907 00:16:43.463927   29917 command_runner.go:130] > This node has joined the cluster:
	I0907 00:16:43.463942   29917 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0907 00:16:43.463953   29917 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0907 00:16:43.463963   29917 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0907 00:16:43.463979   29917 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xy5i4j.l7f20svum8hjqni9 --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-816061-m02": (1.023420096s)
	I0907 00:16:43.463997   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0907 00:16:43.738170   29917 start.go:303] JoinCluster complete in 4.788326208s
	I0907 00:16:43.738199   29917 cni.go:84] Creating CNI manager for ""
	I0907 00:16:43.738207   29917 cni.go:136] 3 nodes found, recommending kindnet
	I0907 00:16:43.738267   29917 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0907 00:16:43.744464   29917 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0907 00:16:43.744480   29917 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0907 00:16:43.744487   29917 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0907 00:16:43.744493   29917 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0907 00:16:43.744498   29917 command_runner.go:130] > Access: 2023-09-07 00:14:12.931697280 +0000
	I0907 00:16:43.744506   29917 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0907 00:16:43.744513   29917 command_runner.go:130] > Change: 2023-09-07 00:14:11.094697280 +0000
	I0907 00:16:43.744518   29917 command_runner.go:130] >  Birth: -
	I0907 00:16:43.744567   29917 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0907 00:16:43.744587   29917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0907 00:16:43.764110   29917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0907 00:16:44.128338   29917 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0907 00:16:44.134813   29917 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0907 00:16:44.139633   29917 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0907 00:16:44.149533   29917 command_runner.go:130] > daemonset.apps/kindnet configured
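
	Once the kindnet manifest has been re-applied, the DaemonSet should end up with one ready pod per joined node. A small, illustrative client-go check of that rollout (this is not part of minikube's own verification logic; the kubeconfig path is an assumption):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.Background(), "kindnet", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// With three nodes joined, DesiredNumberScheduled should settle at 3.
		fmt.Printf("kindnet: %d/%d pods ready\n", ds.Status.NumberReady, ds.Status.DesiredNumberScheduled)
	}
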
	I0907 00:16:44.152584   29917 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:16:44.152781   29917 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:16:44.153045   29917 round_trippers.go:463] GET https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0907 00:16:44.153055   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.153063   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.153069   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.155578   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:16:44.155598   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.155605   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.155611   29917 round_trippers.go:580]     Content-Length: 291
	I0907 00:16:44.155617   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.155625   29917 round_trippers.go:580]     Audit-Id: f9d91511-59d5-4a74-9a66-6ff85b39ab82
	I0907 00:16:44.155630   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.155639   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.155647   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.155666   29917 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"583de68c-e976-43b9-bd36-bcf190acd905","resourceVersion":"900","creationTimestamp":"2023-09-07T00:04:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0907 00:16:44.155742   29917 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-816061" context rescaled to 1 replicas
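
	The scale-subresource read above is what produces the "rescaled to 1 replicas" message: on multi-node profiles minikube keeps coredns at a single replica. A hedged sketch of the same read-then-update through the typed client (kubeconfig path is an assumption):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx := context.Background()
		deploy := cs.AppsV1().Deployments("kube-system")

		// Read the scale subresource, as in GET .../deployments/coredns/scale above.
		scale, err := deploy.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if scale.Spec.Replicas != 1 {
			scale.Spec.Replicas = 1
			if _, err := deploy.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
		fmt.Println("coredns pinned to 1 replica")
	}
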
	I0907 00:16:44.155768   29917 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0907 00:16:44.157713   29917 out.go:177] * Verifying Kubernetes components...
	I0907 00:16:44.159314   29917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:16:44.175902   29917 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:16:44.176148   29917 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:16:44.176433   29917 node_ready.go:35] waiting up to 6m0s for node "multinode-816061-m02" to be "Ready" ...
	I0907 00:16:44.176546   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:16:44.176557   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.176569   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.176582   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.181516   29917 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:16:44.181539   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.181549   29917 round_trippers.go:580]     Audit-Id: d4595410-8e96-4ee3-920d-d2643de82d8b
	I0907 00:16:44.181557   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.181566   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.181576   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.181583   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.181590   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.181746   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"15a4f37e-37a6-46f1-a8e3-c2ab0e788ddf","resourceVersion":"1059","creationTimestamp":"2023-09-07T00:16:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:16:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:16:43Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0907 00:16:44.181976   29917 node_ready.go:49] node "multinode-816061-m02" has status "Ready":"True"
	I0907 00:16:44.181987   29917 node_ready.go:38] duration metric: took 5.499519ms waiting for node "multinode-816061-m02" to be "Ready" ...
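
	The readiness wait driven by node_ready.go polls the Node object until its Ready condition reports True, with a 6m0s budget. Below is a minimal sketch of that polling pattern using wait.PollUntilContextTimeout from recent apimachinery releases; the kubeconfig path and node name are assumptions drawn from this log, not minikube's exact implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 3s for up to 6 minutes, matching the wait budget in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-816061-m02", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep retrying on transient errors
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}
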
	I0907 00:16:44.181994   29917 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:16:44.182043   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:16:44.182050   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.182057   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.182063   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.195897   29917 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0907 00:16:44.195922   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.195932   29917 round_trippers.go:580]     Audit-Id: e2ff9bf7-e942-4e62-9ff2-c16993d23c29
	I0907 00:16:44.195939   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.195948   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.195956   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.195971   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.195987   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.197579   29917 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1063"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"887","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82085 chars]
	I0907 00:16:44.200038   29917 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:44.200104   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:16:44.200112   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.200120   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.200126   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.203952   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:16:44.203969   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.203978   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.203987   29917 round_trippers.go:580]     Audit-Id: 740835c7-2e70-4545-ab3b-4b1ef1b25f7c
	I0907 00:16:44.203996   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.204010   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.204023   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.204038   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.205069   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"887","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0907 00:16:44.205461   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:16:44.205475   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.205485   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.205495   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.207939   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:16:44.207954   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.207960   29917 round_trippers.go:580]     Audit-Id: 3427306b-269c-46a6-b8d7-5365264cf334
	I0907 00:16:44.207966   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.207973   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.207982   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.207993   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.208001   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.208290   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0907 00:16:44.208553   29917 pod_ready.go:92] pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace has status "Ready":"True"
	I0907 00:16:44.208565   29917 pod_ready.go:81] duration metric: took 8.506909ms waiting for pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:44.208574   29917 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:44.208621   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:16:44.208629   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.208635   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.208642   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.211042   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:16:44.211062   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.211071   29917 round_trippers.go:580]     Audit-Id: 50d1b917-fdf6-40da-8343-dd973310efd4
	I0907 00:16:44.211079   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.211087   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.211096   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.211106   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.211122   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.211375   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"910","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0907 00:16:44.211715   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:16:44.211728   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.211738   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.211747   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.214275   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:16:44.214294   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.214304   29917 round_trippers.go:580]     Audit-Id: ac88ed90-53f6-4c84-a196-5c90e8cb6534
	I0907 00:16:44.214310   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.214316   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.214321   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.214330   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.214341   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.214590   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0907 00:16:44.214988   29917 pod_ready.go:92] pod "etcd-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:16:44.215007   29917 pod_ready.go:81] duration metric: took 6.427887ms waiting for pod "etcd-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:44.215033   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:44.215092   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-816061
	I0907 00:16:44.215099   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.215106   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.215112   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.217028   29917 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0907 00:16:44.217041   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.217047   29917 round_trippers.go:580]     Audit-Id: efd4d221-87d8-4adb-84b4-c5d94c4d8e8c
	I0907 00:16:44.217054   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.217059   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.217065   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.217072   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.217078   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.217465   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-816061","namespace":"kube-system","uid":"dbbbc2db-98c3-44e3-a18d-947bad7ffda2","resourceVersion":"880","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.212:8443","kubernetes.io/config.hash":"17d9280f4f521ce2f8119c5c317f1d67","kubernetes.io/config.mirror":"17d9280f4f521ce2f8119c5c317f1d67","kubernetes.io/config.seen":"2023-09-07T00:04:04.251716113Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0907 00:16:44.217850   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:16:44.217861   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.217868   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.217874   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.220376   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:16:44.220390   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.220396   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.220402   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.220415   29917 round_trippers.go:580]     Audit-Id: 80d048d8-2921-4b71-af7c-366f4ffb2a87
	I0907 00:16:44.220425   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.220434   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.220465   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.220658   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0907 00:16:44.220933   29917 pod_ready.go:92] pod "kube-apiserver-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:16:44.220946   29917 pod_ready.go:81] duration metric: took 5.902492ms waiting for pod "kube-apiserver-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:44.220953   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:44.220991   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-816061
	I0907 00:16:44.220998   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.221004   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.221010   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.223496   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:16:44.223509   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.223515   29917 round_trippers.go:580]     Audit-Id: 20b65874-2df9-4815-8d3a-ad8e47772df3
	I0907 00:16:44.223521   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.223527   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.223535   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.223544   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.223552   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.223774   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-816061","namespace":"kube-system","uid":"ea192806-6f42-4471-8e73-ae96aa3bfa06","resourceVersion":"889","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45d88e9a1c94ef1043c5c8795b51d51f","kubernetes.io/config.mirror":"45d88e9a1c94ef1043c5c8795b51d51f","kubernetes.io/config.seen":"2023-09-07T00:04:04.251717776Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0907 00:16:44.224253   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:16:44.224268   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.224279   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.224290   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.226430   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:16:44.226445   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.226456   29917 round_trippers.go:580]     Audit-Id: c814b792-a8f6-4283-8b37-fd30df06734a
	I0907 00:16:44.226464   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.226473   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.226485   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.226497   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.226506   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.226678   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0907 00:16:44.226993   29917 pod_ready.go:92] pod "kube-controller-manager-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:16:44.227005   29917 pod_ready.go:81] duration metric: took 6.047225ms waiting for pod "kube-controller-manager-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:44.227013   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wswp" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:44.377276   29917 request.go:629] Waited for 150.195806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wswp
	I0907 00:16:44.377334   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wswp
	I0907 00:16:44.377344   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.377356   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.377377   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.380436   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:16:44.380456   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.380465   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.380473   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.380481   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.380491   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.380508   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.380517   29917 round_trippers.go:580]     Audit-Id: 60c1a29a-e117-459b-a5e9-38c15fdee4ed
	I0907 00:16:44.380654   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2wswp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4d99412b-fc2d-4fce-a7e2-80da3e220e07","resourceVersion":"1034","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0907 00:16:44.577411   29917 request.go:629] Waited for 196.361251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:16:44.577475   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:16:44.577480   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.577487   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.577494   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.581588   29917 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:16:44.581607   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.581613   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.581618   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.581627   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.581640   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.581649   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.581659   29917 round_trippers.go:580]     Audit-Id: 7ae2987e-f394-44a7-87fe-1e7480f0174e
	I0907 00:16:44.581922   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"15a4f37e-37a6-46f1-a8e3-c2ab0e788ddf","resourceVersion":"1059","creationTimestamp":"2023-09-07T00:16:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:16:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:16:43Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0907 00:16:44.582160   29917 pod_ready.go:92] pod "kube-proxy-2wswp" in "kube-system" namespace has status "Ready":"True"
	I0907 00:16:44.582172   29917 pod_ready.go:81] duration metric: took 355.154272ms waiting for pod "kube-proxy-2wswp" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:44.582180   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dlt4x" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:44.777590   29917 request.go:629] Waited for 195.349801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlt4x
	I0907 00:16:44.777649   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlt4x
	I0907 00:16:44.777654   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.777661   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.777669   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.781508   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:16:44.781533   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.781544   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.781554   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.781563   29917 round_trippers.go:580]     Audit-Id: 14396712-f4ae-4def-b4da-dae31c71ca4c
	I0907 00:16:44.781572   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.781579   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.781587   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.781731   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dlt4x","generateName":"kube-proxy-","namespace":"kube-system","uid":"2c56690f-de33-49ec-8cad-79fdae731daa","resourceVersion":"735","creationTimestamp":"2023-09-07T00:06:03Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:06:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0907 00:16:44.977513   29917 request.go:629] Waited for 195.374446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:16:44.977572   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:16:44.977579   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:44.977593   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:44.977603   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:44.980504   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:16:44.980524   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:44.980531   29917 round_trippers.go:580]     Audit-Id: b9bcb101-87aa-4fd1-a08f-d493ee058ad9
	I0907 00:16:44.980537   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:44.980544   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:44.980552   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:44.980560   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:44.980568   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:44 GMT
	I0907 00:16:44.980777   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m03","uid":"92bc42a5-722e-482e-9f35-19fa4d9a6485","resourceVersion":"903","creationTimestamp":"2023-09-07T00:06:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0907 00:16:44.981060   29917 pod_ready.go:92] pod "kube-proxy-dlt4x" in "kube-system" namespace has status "Ready":"True"
	I0907 00:16:44.981074   29917 pod_ready.go:81] duration metric: took 398.889087ms waiting for pod "kube-proxy-dlt4x" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:44.981083   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbzlv" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:45.177458   29917 request.go:629] Waited for 196.313327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbzlv
	I0907 00:16:45.177527   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbzlv
	I0907 00:16:45.177538   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:45.177553   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:45.177578   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:45.184815   29917 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0907 00:16:45.184843   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:45.184853   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:45.184861   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:45 GMT
	I0907 00:16:45.184869   29917 round_trippers.go:580]     Audit-Id: 28a3e38b-e59a-4430-8037-1fb4de55c9b3
	I0907 00:16:45.184877   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:45.184886   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:45.184895   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:45.185237   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbzlv","generateName":"kube-proxy-","namespace":"kube-system","uid":"6b9717d8-174b-4713-a941-382c81cc659e","resourceVersion":"846","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0907 00:16:45.376989   29917 request.go:629] Waited for 191.224835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:16:45.377062   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:16:45.377073   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:45.377084   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:45.377098   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:45.380006   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:16:45.380023   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:45.380030   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:45.380036   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:45 GMT
	I0907 00:16:45.380043   29917 round_trippers.go:580]     Audit-Id: b597b505-187a-4977-81e3-29fe52b770b0
	I0907 00:16:45.380052   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:45.380062   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:45.380069   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:45.380287   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0907 00:16:45.380637   29917 pod_ready.go:92] pod "kube-proxy-tbzlv" in "kube-system" namespace has status "Ready":"True"
	I0907 00:16:45.380654   29917 pod_ready.go:81] duration metric: took 399.565967ms waiting for pod "kube-proxy-tbzlv" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:45.380667   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:45.577094   29917 request.go:629] Waited for 196.351859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-816061
	I0907 00:16:45.577144   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-816061
	I0907 00:16:45.577149   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:45.577157   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:45.577163   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:45.580703   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:16:45.580723   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:45.580733   29917 round_trippers.go:580]     Audit-Id: f8ff9804-b1bb-45a3-8f39-9e26dca1bd53
	I0907 00:16:45.580742   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:45.580749   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:45.580757   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:45.580765   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:45.580774   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:45 GMT
	I0907 00:16:45.580946   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-816061","namespace":"kube-system","uid":"3fa4fad1-c309-42a9-af5f-28e6398492c7","resourceVersion":"881","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ac3fb26098ffac0d0e40ebb845f9b9fe","kubernetes.io/config.mirror":"ac3fb26098ffac0d0e40ebb845f9b9fe","kubernetes.io/config.seen":"2023-09-07T00:04:04.251718754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0907 00:16:45.776621   29917 request.go:629] Waited for 195.297668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:16:45.776693   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:16:45.776700   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:45.776710   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:45.776720   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:45.784553   29917 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0907 00:16:45.784591   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:45.784607   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:45.784617   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:45 GMT
	I0907 00:16:45.784626   29917 round_trippers.go:580]     Audit-Id: c9ae077b-283c-4d22-b63e-5c4b19391e16
	I0907 00:16:45.784633   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:45.784639   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:45.784645   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:45.784807   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0907 00:16:45.785137   29917 pod_ready.go:92] pod "kube-scheduler-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:16:45.785151   29917 pod_ready.go:81] duration metric: took 404.477102ms waiting for pod "kube-scheduler-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:16:45.785161   29917 pod_ready.go:38] duration metric: took 1.603159869s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:16:45.785178   29917 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:16:45.785223   29917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:16:45.799062   29917 system_svc.go:56] duration metric: took 13.883233ms WaitForService to wait for kubelet.
	I0907 00:16:45.799081   29917 kubeadm.go:581] duration metric: took 1.643291496s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:16:45.799096   29917 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:16:45.977624   29917 request.go:629] Waited for 178.352829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I0907 00:16:45.977670   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I0907 00:16:45.977675   29917 round_trippers.go:469] Request Headers:
	I0907 00:16:45.977683   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:16:45.977690   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:16:45.980837   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:16:45.980863   29917 round_trippers.go:577] Response Headers:
	I0907 00:16:45.980874   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:16:45.980883   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:16:45 GMT
	I0907 00:16:45.980891   29917 round_trippers.go:580]     Audit-Id: 8b93c972-5e3f-4588-8b7c-6f02384f2296
	I0907 00:16:45.980900   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:16:45.980908   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:16:45.980918   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:16:45.981423   29917 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1072"},"items":[{"metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15105 chars]
	I0907 00:16:45.982021   29917 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:16:45.982041   29917 node_conditions.go:123] node cpu capacity is 2
	I0907 00:16:45.982051   29917 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:16:45.982055   29917 node_conditions.go:123] node cpu capacity is 2
	I0907 00:16:45.982059   29917 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:16:45.982062   29917 node_conditions.go:123] node cpu capacity is 2
	I0907 00:16:45.982066   29917 node_conditions.go:105] duration metric: took 182.966475ms to run NodePressure ...
	I0907 00:16:45.982075   29917 start.go:228] waiting for startup goroutines ...
	I0907 00:16:45.982093   29917 start.go:242] writing updated cluster config ...
	I0907 00:16:45.982514   29917 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:16:45.982598   29917 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/config.json ...
	I0907 00:16:45.985835   29917 out.go:177] * Starting worker node multinode-816061-m03 in cluster multinode-816061
	I0907 00:16:45.987204   29917 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:16:45.987224   29917 cache.go:57] Caching tarball of preloaded images
	I0907 00:16:45.987319   29917 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:16:45.987331   29917 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 00:16:45.987421   29917 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/config.json ...
	I0907 00:16:45.987569   29917 start.go:365] acquiring machines lock for multinode-816061-m03: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:16:45.987607   29917 start.go:369] acquired machines lock for "multinode-816061-m03" in 21.705µs
	I0907 00:16:45.987617   29917 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:16:45.987624   29917 fix.go:54] fixHost starting: m03
	I0907 00:16:45.987854   29917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:16:45.987874   29917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:16:46.003234   29917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37747
	I0907 00:16:46.003590   29917 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:16:46.003998   29917 main.go:141] libmachine: Using API Version  1
	I0907 00:16:46.004021   29917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:16:46.004266   29917 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:16:46.004451   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .DriverName
	I0907 00:16:46.004575   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetState
	I0907 00:16:46.006076   29917 fix.go:102] recreateIfNeeded on multinode-816061-m03: state=Running err=<nil>
	W0907 00:16:46.006094   29917 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:16:46.008009   29917 out.go:177] * Updating the running kvm2 "multinode-816061-m03" VM ...
	I0907 00:16:46.009308   29917 machine.go:88] provisioning docker machine ...
	I0907 00:16:46.009325   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .DriverName
	I0907 00:16:46.009529   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetMachineName
	I0907 00:16:46.009666   29917 buildroot.go:166] provisioning hostname "multinode-816061-m03"
	I0907 00:16:46.009686   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetMachineName
	I0907 00:16:46.009809   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHHostname
	I0907 00:16:46.012062   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:16:46.012452   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:b9:fc", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:06:39 +0000 UTC Type:0 Mac:52:54:00:3a:b9:fc Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-816061-m03 Clientid:01:52:54:00:3a:b9:fc}
	I0907 00:16:46.012484   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:16:46.012617   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHPort
	I0907 00:16:46.012769   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHKeyPath
	I0907 00:16:46.012890   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHKeyPath
	I0907 00:16:46.013007   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHUsername
	I0907 00:16:46.013172   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:16:46.013534   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I0907 00:16:46.013547   29917 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-816061-m03 && echo "multinode-816061-m03" | sudo tee /etc/hostname
	I0907 00:16:46.144508   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-816061-m03
	
	I0907 00:16:46.144543   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHHostname
	I0907 00:16:46.147312   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:16:46.147724   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:b9:fc", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:06:39 +0000 UTC Type:0 Mac:52:54:00:3a:b9:fc Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-816061-m03 Clientid:01:52:54:00:3a:b9:fc}
	I0907 00:16:46.147747   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:16:46.147934   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHPort
	I0907 00:16:46.148117   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHKeyPath
	I0907 00:16:46.148250   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHKeyPath
	I0907 00:16:46.148370   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHUsername
	I0907 00:16:46.148534   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:16:46.149157   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I0907 00:16:46.149185   29917 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-816061-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-816061-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-816061-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:16:46.259594   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:16:46.259622   29917 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:16:46.259642   29917 buildroot.go:174] setting up certificates
	I0907 00:16:46.259653   29917 provision.go:83] configureAuth start
	I0907 00:16:46.259663   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetMachineName
	I0907 00:16:46.259885   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetIP
	I0907 00:16:46.262548   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:16:46.262823   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:b9:fc", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:06:39 +0000 UTC Type:0 Mac:52:54:00:3a:b9:fc Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-816061-m03 Clientid:01:52:54:00:3a:b9:fc}
	I0907 00:16:46.262849   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:16:46.263001   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHHostname
	I0907 00:16:46.265176   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:16:46.265519   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:b9:fc", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:06:39 +0000 UTC Type:0 Mac:52:54:00:3a:b9:fc Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-816061-m03 Clientid:01:52:54:00:3a:b9:fc}
	I0907 00:16:46.265560   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:16:46.265707   29917 provision.go:138] copyHostCerts
	I0907 00:16:46.265737   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:16:46.265761   29917 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:16:46.265769   29917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:16:46.265835   29917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:16:46.265902   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:16:46.265919   29917 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:16:46.265926   29917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:16:46.265948   29917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:16:46.265987   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:16:46.266002   29917 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:16:46.266008   29917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:16:46.266027   29917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:16:46.266082   29917 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.multinode-816061-m03 san=[192.168.39.153 192.168.39.153 localhost 127.0.0.1 minikube multinode-816061-m03]
	I0907 00:16:46.460670   29917 provision.go:172] copyRemoteCerts
	I0907 00:16:46.460719   29917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:16:46.460746   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHHostname
	I0907 00:16:46.463382   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:16:46.463726   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:b9:fc", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:06:39 +0000 UTC Type:0 Mac:52:54:00:3a:b9:fc Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-816061-m03 Clientid:01:52:54:00:3a:b9:fc}
	I0907 00:16:46.463755   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:16:46.463936   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHPort
	I0907 00:16:46.464158   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHKeyPath
	I0907 00:16:46.464303   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHUsername
	I0907 00:16:46.464450   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m03/id_rsa Username:docker}
	I0907 00:16:46.548454   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0907 00:16:46.548520   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:16:46.572830   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0907 00:16:46.572897   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0907 00:16:46.596596   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0907 00:16:46.596661   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0907 00:16:46.620666   29917 provision.go:86] duration metric: configureAuth took 360.999432ms
	I0907 00:16:46.620696   29917 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:16:46.620957   29917 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:16:46.621026   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHHostname
	I0907 00:16:46.623438   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:16:46.623894   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:b9:fc", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:06:39 +0000 UTC Type:0 Mac:52:54:00:3a:b9:fc Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-816061-m03 Clientid:01:52:54:00:3a:b9:fc}
	I0907 00:16:46.623933   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:16:46.624092   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHPort
	I0907 00:16:46.624287   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHKeyPath
	I0907 00:16:46.624457   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHKeyPath
	I0907 00:16:46.624610   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHUsername
	I0907 00:16:46.624773   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:16:46.625341   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I0907 00:16:46.625366   29917 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:18:17.337317   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:18:17.337348   29917 machine.go:91] provisioned docker machine in 1m31.328027156s
	I0907 00:18:17.337360   29917 start.go:300] post-start starting for "multinode-816061-m03" (driver="kvm2")
	I0907 00:18:17.337372   29917 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:18:17.337391   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .DriverName
	I0907 00:18:17.337703   29917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:18:17.337729   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHHostname
	I0907 00:18:17.340428   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:18:17.340802   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:b9:fc", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:06:39 +0000 UTC Type:0 Mac:52:54:00:3a:b9:fc Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-816061-m03 Clientid:01:52:54:00:3a:b9:fc}
	I0907 00:18:17.340833   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:18:17.341002   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHPort
	I0907 00:18:17.341192   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHKeyPath
	I0907 00:18:17.341373   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHUsername
	I0907 00:18:17.341555   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m03/id_rsa Username:docker}
	I0907 00:18:17.437913   29917 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:18:17.444889   29917 command_runner.go:130] > NAME=Buildroot
	I0907 00:18:17.444920   29917 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0907 00:18:17.444927   29917 command_runner.go:130] > ID=buildroot
	I0907 00:18:17.444935   29917 command_runner.go:130] > VERSION_ID=2021.02.12
	I0907 00:18:17.444944   29917 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0907 00:18:17.445294   29917 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:18:17.445317   29917 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:18:17.445388   29917 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:18:17.445480   29917 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:18:17.445493   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> /etc/ssl/certs/136572.pem
	I0907 00:18:17.445589   29917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:18:17.487732   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:18:17.516420   29917 start.go:303] post-start completed in 179.045402ms
	I0907 00:18:17.516446   29917 fix.go:56] fixHost completed within 1m31.528820105s
	I0907 00:18:17.516470   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHHostname
	I0907 00:18:17.519165   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:18:17.519579   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:b9:fc", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:06:39 +0000 UTC Type:0 Mac:52:54:00:3a:b9:fc Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-816061-m03 Clientid:01:52:54:00:3a:b9:fc}
	I0907 00:18:17.519605   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:18:17.519789   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHPort
	I0907 00:18:17.519984   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHKeyPath
	I0907 00:18:17.520159   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHKeyPath
	I0907 00:18:17.520328   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHUsername
	I0907 00:18:17.520498   29917 main.go:141] libmachine: Using SSH client type: native
	I0907 00:18:17.520985   29917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I0907 00:18:17.521007   29917 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:18:17.632101   29917 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694045897.626430290
	
	I0907 00:18:17.632117   29917 fix.go:206] guest clock: 1694045897.626430290
	I0907 00:18:17.632124   29917 fix.go:219] Guest: 2023-09-07 00:18:17.62643029 +0000 UTC Remote: 2023-09-07 00:18:17.516450169 +0000 UTC m=+555.333831571 (delta=109.980121ms)
	I0907 00:18:17.632137   29917 fix.go:190] guest clock delta is within tolerance: 109.980121ms
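The guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host clock. A minimal stdlib sketch of that comparison, assuming the same seconds.nanoseconds output format; the 2s tolerance is an assumption, the real threshold lives in minikube's fix.go.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (e.g. "1694045897.626430290")
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1694045897.626430290") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // hypothetical tolerance
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
```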
	I0907 00:18:17.632141   29917 start.go:83] releasing machines lock for "multinode-816061-m03", held for 1m31.644528205s
	I0907 00:18:17.632157   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .DriverName
	I0907 00:18:17.632483   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetIP
	I0907 00:18:17.635008   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:18:17.635305   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:b9:fc", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:06:39 +0000 UTC Type:0 Mac:52:54:00:3a:b9:fc Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-816061-m03 Clientid:01:52:54:00:3a:b9:fc}
	I0907 00:18:17.635356   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:18:17.637131   29917 out.go:177] * Found network options:
	I0907 00:18:17.638766   29917 out.go:177]   - NO_PROXY=192.168.39.212,192.168.39.44
	W0907 00:18:17.640209   29917 proxy.go:119] fail to check proxy env: Error ip not in block
	W0907 00:18:17.640234   29917 proxy.go:119] fail to check proxy env: Error ip not in block
	I0907 00:18:17.640251   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .DriverName
	I0907 00:18:17.640788   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .DriverName
	I0907 00:18:17.640977   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .DriverName
	I0907 00:18:17.641077   29917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:18:17.641113   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHHostname
	W0907 00:18:17.641211   29917 proxy.go:119] fail to check proxy env: Error ip not in block
	W0907 00:18:17.641236   29917 proxy.go:119] fail to check proxy env: Error ip not in block
	I0907 00:18:17.641300   29917 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:18:17.641323   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHHostname
	I0907 00:18:17.643708   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:18:17.644084   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:b9:fc", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:06:39 +0000 UTC Type:0 Mac:52:54:00:3a:b9:fc Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-816061-m03 Clientid:01:52:54:00:3a:b9:fc}
	I0907 00:18:17.644118   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:18:17.644146   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:18:17.644259   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHPort
	I0907 00:18:17.644432   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHKeyPath
	I0907 00:18:17.644565   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHUsername
	I0907 00:18:17.644622   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:b9:fc", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:06:39 +0000 UTC Type:0 Mac:52:54:00:3a:b9:fc Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-816061-m03 Clientid:01:52:54:00:3a:b9:fc}
	I0907 00:18:17.644649   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:18:17.644728   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m03/id_rsa Username:docker}
	I0907 00:18:17.644839   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHPort
	I0907 00:18:17.644994   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHKeyPath
	I0907 00:18:17.645134   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetSSHUsername
	I0907 00:18:17.645261   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m03/id_rsa Username:docker}
	I0907 00:18:17.772127   29917 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0907 00:18:17.890749   29917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0907 00:18:17.897682   29917 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0907 00:18:17.897721   29917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:18:17.897781   29917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:18:17.906441   29917 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0907 00:18:17.906462   29917 start.go:466] detecting cgroup driver to use...
	I0907 00:18:17.906513   29917 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:18:17.921077   29917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:18:17.934499   29917 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:18:17.934548   29917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:18:17.958296   29917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:18:17.971807   29917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:18:18.128134   29917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:18:18.251721   29917 docker.go:212] disabling docker service ...
	I0907 00:18:18.251785   29917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:18:18.265696   29917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:18:18.278077   29917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:18:18.396177   29917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:18:18.514362   29917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:18:18.526723   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:18:18.543850   29917 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0907 00:18:18.544090   29917 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:18:18.544150   29917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:18:18.555054   29917 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:18:18.555134   29917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:18:18.564313   29917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:18:18.573532   29917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
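The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: set the pause image, switch cgroup_manager to cgroupfs, and replace conmon_cgroup with "pod". A rough in-memory equivalent using Go's regexp package, operating on a sample snippet rather than the real file:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror of: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Mirror of: sed '/conmon_cgroup = .*/d'  (drop the old value first)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	// Mirror of: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	// followed by: sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
```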
	I0907 00:18:18.582794   29917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:18:18.591993   29917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:18:18.599888   29917 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0907 00:18:18.599956   29917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:18:18.608488   29917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:18:18.725744   29917 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:18:20.821070   29917 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.095295386s)
	I0907 00:18:20.821099   29917 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:18:20.821151   29917 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:18:20.829698   29917 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0907 00:18:20.829723   29917 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0907 00:18:20.829731   29917 command_runner.go:130] > Device: 16h/22d	Inode: 1224        Links: 1
	I0907 00:18:20.829741   29917 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0907 00:18:20.829750   29917 command_runner.go:130] > Access: 2023-09-07 00:18:20.721827445 +0000
	I0907 00:18:20.829759   29917 command_runner.go:130] > Modify: 2023-09-07 00:18:20.721827445 +0000
	I0907 00:18:20.829772   29917 command_runner.go:130] > Change: 2023-09-07 00:18:20.721827445 +0000
	I0907 00:18:20.829783   29917 command_runner.go:130] >  Birth: -
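The 60s wait for /var/run/crio/crio.sock shown above amounts to polling stat until the path exists and is a unix socket. A small stdlib sketch of that loop; the path and timeout come from the log, the poll interval is an assumption.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls path until it exists and is a unix socket, or the
// deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
```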
	I0907 00:18:20.830055   29917 start.go:534] Will wait 60s for crictl version
	I0907 00:18:20.830128   29917 ssh_runner.go:195] Run: which crictl
	I0907 00:18:20.835056   29917 command_runner.go:130] > /usr/bin/crictl
	I0907 00:18:20.835144   29917 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:18:20.879256   29917 command_runner.go:130] > Version:  0.1.0
	I0907 00:18:20.879281   29917 command_runner.go:130] > RuntimeName:  cri-o
	I0907 00:18:20.879287   29917 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0907 00:18:20.879293   29917 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0907 00:18:20.879308   29917 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:18:20.879376   29917 ssh_runner.go:195] Run: crio --version
	I0907 00:18:20.929954   29917 command_runner.go:130] > crio version 1.24.1
	I0907 00:18:20.929972   29917 command_runner.go:130] > Version:          1.24.1
	I0907 00:18:20.929979   29917 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0907 00:18:20.929984   29917 command_runner.go:130] > GitTreeState:     dirty
	I0907 00:18:20.929989   29917 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0907 00:18:20.929994   29917 command_runner.go:130] > GoVersion:        go1.19.9
	I0907 00:18:20.929997   29917 command_runner.go:130] > Compiler:         gc
	I0907 00:18:20.930004   29917 command_runner.go:130] > Platform:         linux/amd64
	I0907 00:18:20.930018   29917 command_runner.go:130] > Linkmode:         dynamic
	I0907 00:18:20.930031   29917 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0907 00:18:20.930038   29917 command_runner.go:130] > SeccompEnabled:   true
	I0907 00:18:20.930045   29917 command_runner.go:130] > AppArmorEnabled:  false
	I0907 00:18:20.931484   29917 ssh_runner.go:195] Run: crio --version
	I0907 00:18:20.978150   29917 command_runner.go:130] > crio version 1.24.1
	I0907 00:18:20.978173   29917 command_runner.go:130] > Version:          1.24.1
	I0907 00:18:20.978180   29917 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0907 00:18:20.978184   29917 command_runner.go:130] > GitTreeState:     dirty
	I0907 00:18:20.978189   29917 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0907 00:18:20.978194   29917 command_runner.go:130] > GoVersion:        go1.19.9
	I0907 00:18:20.978198   29917 command_runner.go:130] > Compiler:         gc
	I0907 00:18:20.978202   29917 command_runner.go:130] > Platform:         linux/amd64
	I0907 00:18:20.978207   29917 command_runner.go:130] > Linkmode:         dynamic
	I0907 00:18:20.978214   29917 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0907 00:18:20.978217   29917 command_runner.go:130] > SeccompEnabled:   true
	I0907 00:18:20.978221   29917 command_runner.go:130] > AppArmorEnabled:  false
	I0907 00:18:20.981759   29917 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:18:20.983203   29917 out.go:177]   - env NO_PROXY=192.168.39.212
	I0907 00:18:20.984619   29917 out.go:177]   - env NO_PROXY=192.168.39.212,192.168.39.44
	I0907 00:18:20.985889   29917 main.go:141] libmachine: (multinode-816061-m03) Calling .GetIP
	I0907 00:18:20.988491   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:18:20.988848   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:b9:fc", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:06:39 +0000 UTC Type:0 Mac:52:54:00:3a:b9:fc Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-816061-m03 Clientid:01:52:54:00:3a:b9:fc}
	I0907 00:18:20.988875   29917 main.go:141] libmachine: (multinode-816061-m03) DBG | domain multinode-816061-m03 has defined IP address 192.168.39.153 and MAC address 52:54:00:3a:b9:fc in network mk-multinode-816061
	I0907 00:18:20.989070   29917 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0907 00:18:20.993438   29917 command_runner.go:130] > 192.168.39.1	host.minikube.internal
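The grep above verifies that the guest's /etc/hosts already maps host.minikube.internal to the gateway IP. A hedged stdlib sketch of that check-and-append step (must run as root on the guest; the IP and hostname are the ones from the log, the helper name is made up):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry appends "ip\thost" to /etc/hosts unless a non-comment line
// for host already exists. Needs root to write the file.
func ensureHostsEntry(ip, host string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.Contains(line, host) && !strings.HasPrefix(strings.TrimSpace(line), "#") {
			return nil // entry already present, nothing to do
		}
	}
	f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "%s\t%s\n", ip, host)
	return err
}

func main() {
	if err := ensureHostsEntry("192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```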
	I0907 00:18:20.993838   29917 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061 for IP: 192.168.39.153
	I0907 00:18:20.993864   29917 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:18:20.994022   29917 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:18:20.994060   29917 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:18:20.994069   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0907 00:18:20.994083   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0907 00:18:20.994095   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0907 00:18:20.994115   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0907 00:18:20.994167   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:18:20.994197   29917 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:18:20.994207   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:18:20.994231   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:18:20.994254   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:18:20.994277   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:18:20.994313   29917 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:18:20.994340   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem -> /usr/share/ca-certificates/13657.pem
	I0907 00:18:20.994352   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> /usr/share/ca-certificates/136572.pem
	I0907 00:18:20.994364   29917 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:18:20.994711   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:18:21.019972   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:18:21.044801   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:18:21.068776   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:18:21.093009   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:18:21.116768   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:18:21.141397   29917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:18:21.165442   29917 ssh_runner.go:195] Run: openssl version
	I0907 00:18:21.171323   29917 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0907 00:18:21.171689   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:18:21.181821   29917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:18:21.186453   29917 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:18:21.186737   29917 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:18:21.186803   29917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:18:21.192347   29917 command_runner.go:130] > 51391683
	I0907 00:18:21.192653   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:18:21.201050   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:18:21.211432   29917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:18:21.216196   29917 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:18:21.216225   29917 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:18:21.216269   29917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:18:21.222091   29917 command_runner.go:130] > 3ec20f2e
	I0907 00:18:21.222166   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:18:21.231094   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:18:21.241357   29917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:18:21.246161   29917 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:18:21.246401   29917 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:18:21.246448   29917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:18:21.252052   29917 command_runner.go:130] > b5213941
	I0907 00:18:21.252169   29917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
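Each CA install above hashes the certificate with openssl and links it as /etc/ssl/certs/<hash>.0. A small sketch of that pattern, shelling out to openssl the same way the log does; the certificate path is the log's, the helper name is made up.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink, mimicking `openssl x509 -hash` + `ln -fs`.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```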
	I0907 00:18:21.260723   29917 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:18:21.264858   29917 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0907 00:18:21.265041   29917 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0907 00:18:21.265159   29917 ssh_runner.go:195] Run: crio config
	I0907 00:18:21.316250   29917 command_runner.go:130] ! time="2023-09-07 00:18:21.310658738Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0907 00:18:21.316311   29917 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0907 00:18:21.333782   29917 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0907 00:18:21.333812   29917 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0907 00:18:21.333823   29917 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0907 00:18:21.333829   29917 command_runner.go:130] > #
	I0907 00:18:21.333839   29917 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0907 00:18:21.333848   29917 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0907 00:18:21.333859   29917 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0907 00:18:21.333876   29917 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0907 00:18:21.333883   29917 command_runner.go:130] > # reload'.
	I0907 00:18:21.333896   29917 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0907 00:18:21.333908   29917 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0907 00:18:21.333922   29917 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0907 00:18:21.333934   29917 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0907 00:18:21.333943   29917 command_runner.go:130] > [crio]
	I0907 00:18:21.333952   29917 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0907 00:18:21.333963   29917 command_runner.go:130] > # containers images, in this directory.
	I0907 00:18:21.333971   29917 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0907 00:18:21.333979   29917 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0907 00:18:21.333986   29917 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0907 00:18:21.333992   29917 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0907 00:18:21.334000   29917 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0907 00:18:21.334005   29917 command_runner.go:130] > storage_driver = "overlay"
	I0907 00:18:21.334011   29917 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0907 00:18:21.334016   29917 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0907 00:18:21.334021   29917 command_runner.go:130] > storage_option = [
	I0907 00:18:21.334027   29917 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0907 00:18:21.334031   29917 command_runner.go:130] > ]
	I0907 00:18:21.334042   29917 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0907 00:18:21.334050   29917 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0907 00:18:21.334055   29917 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0907 00:18:21.334063   29917 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0907 00:18:21.334069   29917 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0907 00:18:21.334074   29917 command_runner.go:130] > # always happen on a node reboot
	I0907 00:18:21.334078   29917 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0907 00:18:21.334086   29917 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0907 00:18:21.334092   29917 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0907 00:18:21.334102   29917 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0907 00:18:21.334109   29917 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0907 00:18:21.334116   29917 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0907 00:18:21.334126   29917 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0907 00:18:21.334132   29917 command_runner.go:130] > # internal_wipe = true
	I0907 00:18:21.334138   29917 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0907 00:18:21.334147   29917 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0907 00:18:21.334153   29917 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0907 00:18:21.334160   29917 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0907 00:18:21.334166   29917 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0907 00:18:21.334172   29917 command_runner.go:130] > [crio.api]
	I0907 00:18:21.334177   29917 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0907 00:18:21.334184   29917 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0907 00:18:21.334189   29917 command_runner.go:130] > # IP address on which the stream server will listen.
	I0907 00:18:21.334196   29917 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0907 00:18:21.334203   29917 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0907 00:18:21.334210   29917 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0907 00:18:21.334214   29917 command_runner.go:130] > # stream_port = "0"
	I0907 00:18:21.334222   29917 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0907 00:18:21.334226   29917 command_runner.go:130] > # stream_enable_tls = false
	I0907 00:18:21.334235   29917 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0907 00:18:21.334239   29917 command_runner.go:130] > # stream_idle_timeout = ""
	I0907 00:18:21.334246   29917 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0907 00:18:21.334254   29917 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0907 00:18:21.334257   29917 command_runner.go:130] > # minutes.
	I0907 00:18:21.334261   29917 command_runner.go:130] > # stream_tls_cert = ""
	I0907 00:18:21.334269   29917 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0907 00:18:21.334275   29917 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0907 00:18:21.334281   29917 command_runner.go:130] > # stream_tls_key = ""
	I0907 00:18:21.334287   29917 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0907 00:18:21.334293   29917 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0907 00:18:21.334298   29917 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0907 00:18:21.334304   29917 command_runner.go:130] > # stream_tls_ca = ""
	I0907 00:18:21.334311   29917 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0907 00:18:21.334318   29917 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0907 00:18:21.334325   29917 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0907 00:18:21.334331   29917 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0907 00:18:21.334349   29917 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0907 00:18:21.334361   29917 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0907 00:18:21.334370   29917 command_runner.go:130] > [crio.runtime]
	I0907 00:18:21.334380   29917 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0907 00:18:21.334391   29917 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0907 00:18:21.334401   29917 command_runner.go:130] > # "nofile=1024:2048"
	I0907 00:18:21.334411   29917 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0907 00:18:21.334421   29917 command_runner.go:130] > # default_ulimits = [
	I0907 00:18:21.334426   29917 command_runner.go:130] > # ]
	I0907 00:18:21.334435   29917 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0907 00:18:21.334444   29917 command_runner.go:130] > # no_pivot = false
	I0907 00:18:21.334450   29917 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0907 00:18:21.334458   29917 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0907 00:18:21.334463   29917 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0907 00:18:21.334471   29917 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0907 00:18:21.334476   29917 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0907 00:18:21.334487   29917 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0907 00:18:21.334494   29917 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0907 00:18:21.334499   29917 command_runner.go:130] > # Cgroup setting for conmon
	I0907 00:18:21.334508   29917 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0907 00:18:21.334513   29917 command_runner.go:130] > conmon_cgroup = "pod"
	I0907 00:18:21.334518   29917 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0907 00:18:21.334526   29917 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0907 00:18:21.334532   29917 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0907 00:18:21.334541   29917 command_runner.go:130] > conmon_env = [
	I0907 00:18:21.334551   29917 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0907 00:18:21.334557   29917 command_runner.go:130] > ]
	I0907 00:18:21.334562   29917 command_runner.go:130] > # Additional environment variables to set for all the
	I0907 00:18:21.334569   29917 command_runner.go:130] > # containers. These are overridden if set in the
	I0907 00:18:21.334575   29917 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0907 00:18:21.334581   29917 command_runner.go:130] > # default_env = [
	I0907 00:18:21.334584   29917 command_runner.go:130] > # ]
	I0907 00:18:21.334590   29917 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0907 00:18:21.334596   29917 command_runner.go:130] > # selinux = false
	I0907 00:18:21.334603   29917 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0907 00:18:21.334609   29917 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0907 00:18:21.334616   29917 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0907 00:18:21.334620   29917 command_runner.go:130] > # seccomp_profile = ""
	I0907 00:18:21.334628   29917 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0907 00:18:21.334634   29917 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0907 00:18:21.334640   29917 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0907 00:18:21.334647   29917 command_runner.go:130] > # which might increase security.
	I0907 00:18:21.334652   29917 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0907 00:18:21.334660   29917 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0907 00:18:21.334666   29917 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0907 00:18:21.334672   29917 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0907 00:18:21.334678   29917 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0907 00:18:21.334684   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:18:21.334688   29917 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0907 00:18:21.334693   29917 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0907 00:18:21.334698   29917 command_runner.go:130] > # the cgroup blockio controller.
	I0907 00:18:21.334702   29917 command_runner.go:130] > # blockio_config_file = ""
	I0907 00:18:21.334709   29917 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0907 00:18:21.334715   29917 command_runner.go:130] > # irqbalance daemon.
	I0907 00:18:21.334720   29917 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0907 00:18:21.334729   29917 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0907 00:18:21.334734   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:18:21.334740   29917 command_runner.go:130] > # rdt_config_file = ""
	I0907 00:18:21.334745   29917 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0907 00:18:21.334752   29917 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0907 00:18:21.334757   29917 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0907 00:18:21.334763   29917 command_runner.go:130] > # separate_pull_cgroup = ""
	I0907 00:18:21.334769   29917 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0907 00:18:21.334794   29917 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0907 00:18:21.334802   29917 command_runner.go:130] > # will be added.
	I0907 00:18:21.334807   29917 command_runner.go:130] > # default_capabilities = [
	I0907 00:18:21.334813   29917 command_runner.go:130] > # 	"CHOWN",
	I0907 00:18:21.334817   29917 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0907 00:18:21.334824   29917 command_runner.go:130] > # 	"FSETID",
	I0907 00:18:21.334827   29917 command_runner.go:130] > # 	"FOWNER",
	I0907 00:18:21.334831   29917 command_runner.go:130] > # 	"SETGID",
	I0907 00:18:21.334835   29917 command_runner.go:130] > # 	"SETUID",
	I0907 00:18:21.334839   29917 command_runner.go:130] > # 	"SETPCAP",
	I0907 00:18:21.334843   29917 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0907 00:18:21.334847   29917 command_runner.go:130] > # 	"KILL",
	I0907 00:18:21.334851   29917 command_runner.go:130] > # ]
	I0907 00:18:21.334857   29917 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0907 00:18:21.334865   29917 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0907 00:18:21.334872   29917 command_runner.go:130] > # default_sysctls = [
	I0907 00:18:21.334875   29917 command_runner.go:130] > # ]
	I0907 00:18:21.334880   29917 command_runner.go:130] > # List of devices on the host that a
	I0907 00:18:21.334886   29917 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0907 00:18:21.334892   29917 command_runner.go:130] > # allowed_devices = [
	I0907 00:18:21.334898   29917 command_runner.go:130] > # 	"/dev/fuse",
	I0907 00:18:21.334901   29917 command_runner.go:130] > # ]
	I0907 00:18:21.334906   29917 command_runner.go:130] > # List of additional devices. specified as
	I0907 00:18:21.334941   29917 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0907 00:18:21.334952   29917 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0907 00:18:21.334972   29917 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0907 00:18:21.334979   29917 command_runner.go:130] > # additional_devices = [
	I0907 00:18:21.334982   29917 command_runner.go:130] > # ]
	I0907 00:18:21.334988   29917 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0907 00:18:21.334994   29917 command_runner.go:130] > # cdi_spec_dirs = [
	I0907 00:18:21.334998   29917 command_runner.go:130] > # 	"/etc/cdi",
	I0907 00:18:21.335004   29917 command_runner.go:130] > # 	"/var/run/cdi",
	I0907 00:18:21.335008   29917 command_runner.go:130] > # ]
	I0907 00:18:21.335014   29917 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0907 00:18:21.335022   29917 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0907 00:18:21.335042   29917 command_runner.go:130] > # Defaults to false.
	I0907 00:18:21.335052   29917 command_runner.go:130] > # device_ownership_from_security_context = false
	I0907 00:18:21.335058   29917 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0907 00:18:21.335066   29917 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0907 00:18:21.335071   29917 command_runner.go:130] > # hooks_dir = [
	I0907 00:18:21.335078   29917 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0907 00:18:21.335082   29917 command_runner.go:130] > # ]
	I0907 00:18:21.335090   29917 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0907 00:18:21.335097   29917 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0907 00:18:21.335104   29917 command_runner.go:130] > # its default mounts from the following two files:
	I0907 00:18:21.335107   29917 command_runner.go:130] > #
	I0907 00:18:21.335113   29917 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0907 00:18:21.335120   29917 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0907 00:18:21.335128   29917 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0907 00:18:21.335133   29917 command_runner.go:130] > #
	I0907 00:18:21.335140   29917 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0907 00:18:21.335146   29917 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0907 00:18:21.335155   29917 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0907 00:18:21.335160   29917 command_runner.go:130] > #      only add mounts it finds in this file.
	I0907 00:18:21.335165   29917 command_runner.go:130] > #
	I0907 00:18:21.335169   29917 command_runner.go:130] > # default_mounts_file = ""
	I0907 00:18:21.335176   29917 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0907 00:18:21.335183   29917 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0907 00:18:21.335189   29917 command_runner.go:130] > pids_limit = 1024
	I0907 00:18:21.335195   29917 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0907 00:18:21.335203   29917 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0907 00:18:21.335209   29917 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0907 00:18:21.335219   29917 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0907 00:18:21.335222   29917 command_runner.go:130] > # log_size_max = -1
	I0907 00:18:21.335229   29917 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0907 00:18:21.335235   29917 command_runner.go:130] > # log_to_journald = false
	I0907 00:18:21.335241   29917 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0907 00:18:21.335248   29917 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0907 00:18:21.335253   29917 command_runner.go:130] > # Path to directory for container attach sockets.
	I0907 00:18:21.335260   29917 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0907 00:18:21.335265   29917 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0907 00:18:21.335271   29917 command_runner.go:130] > # bind_mount_prefix = ""
	I0907 00:18:21.335276   29917 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0907 00:18:21.335281   29917 command_runner.go:130] > # read_only = false
	I0907 00:18:21.335287   29917 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0907 00:18:21.335295   29917 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0907 00:18:21.335299   29917 command_runner.go:130] > # live configuration reload.
	I0907 00:18:21.335305   29917 command_runner.go:130] > # log_level = "info"
	I0907 00:18:21.335311   29917 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0907 00:18:21.335318   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:18:21.335321   29917 command_runner.go:130] > # log_filter = ""
	I0907 00:18:21.335329   29917 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0907 00:18:21.335334   29917 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0907 00:18:21.335341   29917 command_runner.go:130] > # separated by comma.
	I0907 00:18:21.335346   29917 command_runner.go:130] > # uid_mappings = ""
	I0907 00:18:21.335358   29917 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0907 00:18:21.335373   29917 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0907 00:18:21.335383   29917 command_runner.go:130] > # separated by comma.
	I0907 00:18:21.335389   29917 command_runner.go:130] > # gid_mappings = ""
	I0907 00:18:21.335402   29917 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0907 00:18:21.335415   29917 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0907 00:18:21.335428   29917 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0907 00:18:21.335438   29917 command_runner.go:130] > # minimum_mappable_uid = -1
	I0907 00:18:21.335450   29917 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0907 00:18:21.335462   29917 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0907 00:18:21.335474   29917 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0907 00:18:21.335481   29917 command_runner.go:130] > # minimum_mappable_gid = -1
	I0907 00:18:21.335491   29917 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0907 00:18:21.335498   29917 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0907 00:18:21.335506   29917 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0907 00:18:21.335510   29917 command_runner.go:130] > # ctr_stop_timeout = 30
	I0907 00:18:21.335516   29917 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0907 00:18:21.335524   29917 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0907 00:18:21.335529   29917 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0907 00:18:21.335534   29917 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0907 00:18:21.335542   29917 command_runner.go:130] > drop_infra_ctr = false
	I0907 00:18:21.335551   29917 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0907 00:18:21.335558   29917 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0907 00:18:21.335568   29917 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0907 00:18:21.335573   29917 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0907 00:18:21.335580   29917 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0907 00:18:21.335587   29917 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0907 00:18:21.335592   29917 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0907 00:18:21.335599   29917 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0907 00:18:21.335604   29917 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0907 00:18:21.335610   29917 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0907 00:18:21.335619   29917 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0907 00:18:21.335625   29917 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0907 00:18:21.335632   29917 command_runner.go:130] > # default_runtime = "runc"
	I0907 00:18:21.335639   29917 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0907 00:18:21.335646   29917 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0907 00:18:21.335656   29917 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0907 00:18:21.335663   29917 command_runner.go:130] > # creation as a file is not desired either.
	I0907 00:18:21.335672   29917 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0907 00:18:21.335679   29917 command_runner.go:130] > # the hostname is being managed dynamically.
	I0907 00:18:21.335683   29917 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0907 00:18:21.335687   29917 command_runner.go:130] > # ]
	I0907 00:18:21.335693   29917 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0907 00:18:21.335701   29917 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0907 00:18:21.335709   29917 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0907 00:18:21.335715   29917 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0907 00:18:21.335721   29917 command_runner.go:130] > #
	I0907 00:18:21.335725   29917 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0907 00:18:21.335730   29917 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0907 00:18:21.335736   29917 command_runner.go:130] > #  runtime_type = "oci"
	I0907 00:18:21.335740   29917 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0907 00:18:21.335746   29917 command_runner.go:130] > #  privileged_without_host_devices = false
	I0907 00:18:21.335750   29917 command_runner.go:130] > #  allowed_annotations = []
	I0907 00:18:21.335756   29917 command_runner.go:130] > # Where:
	I0907 00:18:21.335761   29917 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0907 00:18:21.335769   29917 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0907 00:18:21.335775   29917 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0907 00:18:21.335783   29917 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0907 00:18:21.335789   29917 command_runner.go:130] > #   in $PATH.
	I0907 00:18:21.335795   29917 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0907 00:18:21.335800   29917 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0907 00:18:21.335806   29917 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0907 00:18:21.335813   29917 command_runner.go:130] > #   state.
	I0907 00:18:21.335819   29917 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0907 00:18:21.335824   29917 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0907 00:18:21.335832   29917 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0907 00:18:21.335838   29917 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0907 00:18:21.335845   29917 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0907 00:18:21.335852   29917 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0907 00:18:21.335859   29917 command_runner.go:130] > #   The currently recognized values are:
	I0907 00:18:21.335865   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0907 00:18:21.335874   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0907 00:18:21.335883   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0907 00:18:21.335889   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0907 00:18:21.335899   29917 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0907 00:18:21.335905   29917 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0907 00:18:21.335913   29917 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0907 00:18:21.335919   29917 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0907 00:18:21.335926   29917 command_runner.go:130] > #   should be moved to the container's cgroup
	I0907 00:18:21.335931   29917 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0907 00:18:21.335937   29917 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0907 00:18:21.335941   29917 command_runner.go:130] > runtime_type = "oci"
	I0907 00:18:21.335947   29917 command_runner.go:130] > runtime_root = "/run/runc"
	I0907 00:18:21.335951   29917 command_runner.go:130] > runtime_config_path = ""
	I0907 00:18:21.335956   29917 command_runner.go:130] > monitor_path = ""
	I0907 00:18:21.335960   29917 command_runner.go:130] > monitor_cgroup = ""
	I0907 00:18:21.335966   29917 command_runner.go:130] > monitor_exec_cgroup = ""
	I0907 00:18:21.335972   29917 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0907 00:18:21.335978   29917 command_runner.go:130] > # running containers
	I0907 00:18:21.335982   29917 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0907 00:18:21.335988   29917 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0907 00:18:21.336016   29917 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0907 00:18:21.336023   29917 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0907 00:18:21.336029   29917 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0907 00:18:21.336039   29917 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0907 00:18:21.336046   29917 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0907 00:18:21.336051   29917 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0907 00:18:21.336058   29917 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0907 00:18:21.336063   29917 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0907 00:18:21.336071   29917 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0907 00:18:21.336077   29917 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0907 00:18:21.336083   29917 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0907 00:18:21.336092   29917 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0907 00:18:21.336102   29917 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0907 00:18:21.336108   29917 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0907 00:18:21.336116   29917 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0907 00:18:21.336126   29917 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0907 00:18:21.336131   29917 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0907 00:18:21.336143   29917 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0907 00:18:21.336149   29917 command_runner.go:130] > # Example:
	I0907 00:18:21.336154   29917 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0907 00:18:21.336161   29917 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0907 00:18:21.336165   29917 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0907 00:18:21.336172   29917 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0907 00:18:21.336176   29917 command_runner.go:130] > # cpuset = 0
	I0907 00:18:21.336182   29917 command_runner.go:130] > # cpushares = "0-1"
	I0907 00:18:21.336186   29917 command_runner.go:130] > # Where:
	I0907 00:18:21.336190   29917 command_runner.go:130] > # The workload name is workload-type.
	I0907 00:18:21.336197   29917 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0907 00:18:21.336205   29917 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0907 00:18:21.336210   29917 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0907 00:18:21.336219   29917 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0907 00:18:21.336225   29917 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0907 00:18:21.336231   29917 command_runner.go:130] > # 
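
	The workloads mechanism described in the comments above is driven entirely by pod annotations. As an illustrative sketch only (not part of this run), such a workload could be declared through a CRI-O drop-in, assuming the node honors /etc/crio/crio.conf.d and using a hypothetical workload name "throttled":

$ sudo tee /etc/crio/crio.conf.d/10-workload.conf <<'EOF'
[crio.runtime.workloads.throttled]
activation_annotation = "io.crio/workload"
annotation_prefix = "io.crio.workload-type"
[crio.runtime.workloads.throttled.resources]
cpuset = "0-1"
EOF
$ sudo systemctl restart crio

	A pod opting in would then carry the io.crio/workload annotation, with per-container overrides expressed as $annotation_prefix.$resource/$ctrName, exactly as the echoed config comments describe; the field layout here mirrors the upstream example and is not validated against this CRI-O build.
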
	I0907 00:18:21.336237   29917 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0907 00:18:21.336240   29917 command_runner.go:130] > #
	I0907 00:18:21.336246   29917 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0907 00:18:21.336253   29917 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0907 00:18:21.336259   29917 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0907 00:18:21.336267   29917 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0907 00:18:21.336274   29917 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0907 00:18:21.336280   29917 command_runner.go:130] > [crio.image]
	I0907 00:18:21.336286   29917 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0907 00:18:21.336293   29917 command_runner.go:130] > # default_transport = "docker://"
	I0907 00:18:21.336299   29917 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0907 00:18:21.336307   29917 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0907 00:18:21.336312   29917 command_runner.go:130] > # global_auth_file = ""
	I0907 00:18:21.336317   29917 command_runner.go:130] > # The image used to instantiate infra containers.
	I0907 00:18:21.336324   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:18:21.336328   29917 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0907 00:18:21.336336   29917 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0907 00:18:21.336344   29917 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0907 00:18:21.336351   29917 command_runner.go:130] > # This option supports live configuration reload.
	I0907 00:18:21.336362   29917 command_runner.go:130] > # pause_image_auth_file = ""
	I0907 00:18:21.336374   29917 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0907 00:18:21.336388   29917 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0907 00:18:21.336401   29917 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0907 00:18:21.336413   29917 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0907 00:18:21.336423   29917 command_runner.go:130] > # pause_command = "/pause"
	I0907 00:18:21.336433   29917 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0907 00:18:21.336446   29917 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0907 00:18:21.336459   29917 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0907 00:18:21.336472   29917 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0907 00:18:21.336482   29917 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0907 00:18:21.336487   29917 command_runner.go:130] > # signature_policy = ""
	I0907 00:18:21.336493   29917 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0907 00:18:21.336501   29917 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0907 00:18:21.336506   29917 command_runner.go:130] > # changing them here.
	I0907 00:18:21.336513   29917 command_runner.go:130] > # insecure_registries = [
	I0907 00:18:21.336516   29917 command_runner.go:130] > # ]
	I0907 00:18:21.336528   29917 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0907 00:18:21.336535   29917 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0907 00:18:21.336539   29917 command_runner.go:130] > # image_volumes = "mkdir"
	I0907 00:18:21.336545   29917 command_runner.go:130] > # Temporary directory to use for storing big files
	I0907 00:18:21.336550   29917 command_runner.go:130] > # big_files_temporary_dir = ""
	I0907 00:18:21.336558   29917 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0907 00:18:21.336562   29917 command_runner.go:130] > # CNI plugins.
	I0907 00:18:21.336568   29917 command_runner.go:130] > [crio.network]
	I0907 00:18:21.336574   29917 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0907 00:18:21.336579   29917 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0907 00:18:21.336584   29917 command_runner.go:130] > # cni_default_network = ""
	I0907 00:18:21.336590   29917 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0907 00:18:21.336596   29917 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0907 00:18:21.336602   29917 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0907 00:18:21.336608   29917 command_runner.go:130] > # plugin_dirs = [
	I0907 00:18:21.336611   29917 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0907 00:18:21.336615   29917 command_runner.go:130] > # ]
	I0907 00:18:21.336620   29917 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0907 00:18:21.336626   29917 command_runner.go:130] > [crio.metrics]
	I0907 00:18:21.336631   29917 command_runner.go:130] > # Globally enable or disable metrics support.
	I0907 00:18:21.336634   29917 command_runner.go:130] > enable_metrics = true
	I0907 00:18:21.336641   29917 command_runner.go:130] > # Specify enabled metrics collectors.
	I0907 00:18:21.336646   29917 command_runner.go:130] > # Per default all metrics are enabled.
	I0907 00:18:21.336652   29917 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0907 00:18:21.336660   29917 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0907 00:18:21.336666   29917 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0907 00:18:21.336671   29917 command_runner.go:130] > # metrics_collectors = [
	I0907 00:18:21.336675   29917 command_runner.go:130] > # 	"operations",
	I0907 00:18:21.336681   29917 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0907 00:18:21.336687   29917 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0907 00:18:21.336692   29917 command_runner.go:130] > # 	"operations_errors",
	I0907 00:18:21.336697   29917 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0907 00:18:21.336701   29917 command_runner.go:130] > # 	"image_pulls_by_name",
	I0907 00:18:21.336706   29917 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0907 00:18:21.336710   29917 command_runner.go:130] > # 	"image_pulls_failures",
	I0907 00:18:21.336715   29917 command_runner.go:130] > # 	"image_pulls_successes",
	I0907 00:18:21.336719   29917 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0907 00:18:21.336723   29917 command_runner.go:130] > # 	"image_layer_reuse",
	I0907 00:18:21.336728   29917 command_runner.go:130] > # 	"containers_oom_total",
	I0907 00:18:21.336734   29917 command_runner.go:130] > # 	"containers_oom",
	I0907 00:18:21.336738   29917 command_runner.go:130] > # 	"processes_defunct",
	I0907 00:18:21.336745   29917 command_runner.go:130] > # 	"operations_total",
	I0907 00:18:21.336749   29917 command_runner.go:130] > # 	"operations_latency_seconds",
	I0907 00:18:21.336753   29917 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0907 00:18:21.336758   29917 command_runner.go:130] > # 	"operations_errors_total",
	I0907 00:18:21.336763   29917 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0907 00:18:21.336770   29917 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0907 00:18:21.336775   29917 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0907 00:18:21.336781   29917 command_runner.go:130] > # 	"image_pulls_success_total",
	I0907 00:18:21.336785   29917 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0907 00:18:21.336791   29917 command_runner.go:130] > # 	"containers_oom_count_total",
	I0907 00:18:21.336795   29917 command_runner.go:130] > # ]
	I0907 00:18:21.336802   29917 command_runner.go:130] > # The port on which the metrics server will listen.
	I0907 00:18:21.336806   29917 command_runner.go:130] > # metrics_port = 9090
	I0907 00:18:21.336813   29917 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0907 00:18:21.336817   29917 command_runner.go:130] > # metrics_socket = ""
	I0907 00:18:21.336824   29917 command_runner.go:130] > # The certificate for the secure metrics server.
	I0907 00:18:21.336831   29917 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0907 00:18:21.336839   29917 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0907 00:18:21.336843   29917 command_runner.go:130] > # certificate on any modification event.
	I0907 00:18:21.336849   29917 command_runner.go:130] > # metrics_cert = ""
	I0907 00:18:21.336854   29917 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0907 00:18:21.336859   29917 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0907 00:18:21.336865   29917 command_runner.go:130] > # metrics_key = ""
	I0907 00:18:21.336871   29917 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0907 00:18:21.336877   29917 command_runner.go:130] > [crio.tracing]
	I0907 00:18:21.336882   29917 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0907 00:18:21.336888   29917 command_runner.go:130] > # enable_tracing = false
	I0907 00:18:21.336893   29917 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0907 00:18:21.336898   29917 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0907 00:18:21.336903   29917 command_runner.go:130] > # Number of samples to collect per million spans.
	I0907 00:18:21.336910   29917 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0907 00:18:21.336916   29917 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0907 00:18:21.336920   29917 command_runner.go:130] > [crio.stats]
	I0907 00:18:21.336927   29917 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0907 00:18:21.336939   29917 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0907 00:18:21.336946   29917 command_runner.go:130] > # stats_collection_period = 0
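
	The block above is CRI-O echoing its effective configuration while the third node is provisioned. To read the same configuration directly on a node, something along these lines should work (profile and node names are taken from this run; the config path is the conventional one and may differ by ISO version):

$ minikube -p multinode-816061 ssh -n m03 -- sudo cat /etc/crio/crio.conf
$ minikube -p multinode-816061 ssh -n m03 -- sudo ls /etc/crio/crio.conf.d/
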
	I0907 00:18:21.337011   29917 cni.go:84] Creating CNI manager for ""
	I0907 00:18:21.337019   29917 cni.go:136] 3 nodes found, recommending kindnet
	I0907 00:18:21.337026   29917 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:18:21.337048   29917 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.153 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-816061 NodeName:multinode-816061-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:18:21.337147   29917 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-816061-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.153
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
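
	The generated ClusterConfiguration above should agree with what kubeadm already stores cluster-wide; a quick way to compare is to dump the kubeadm-config ConfigMap (the same object the join preflight reads later in this log):

$ kubectl --context multinode-816061 -n kube-system get configmap kubeadm-config -o yaml
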
	
	I0907 00:18:21.337211   29917 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-816061-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:18:21.337273   29917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:18:21.346526   29917 command_runner.go:130] > kubeadm
	I0907 00:18:21.346548   29917 command_runner.go:130] > kubectl
	I0907 00:18:21.346554   29917 command_runner.go:130] > kubelet
	I0907 00:18:21.346624   29917 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:18:21.346695   29917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0907 00:18:21.355776   29917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0907 00:18:21.372499   29917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
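
	With the kubelet drop-in and unit written (byte sizes above), the rendered unit can be inspected on the worker; a sketch, assuming SSH access through the profile:

$ minikube -p multinode-816061 ssh -n m03 -- systemctl cat kubelet
$ minikube -p multinode-816061 ssh -n m03 -- cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
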
	I0907 00:18:21.389040   29917 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0907 00:18:21.393112   29917 command_runner.go:130] > 192.168.39.212	control-plane.minikube.internal
	I0907 00:18:21.393183   29917 host.go:66] Checking if "multinode-816061" exists ...
	I0907 00:18:21.393550   29917 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:18:21.393623   29917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:18:21.393672   29917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:18:21.408726   29917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36441
	I0907 00:18:21.409117   29917 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:18:21.409620   29917 main.go:141] libmachine: Using API Version  1
	I0907 00:18:21.409649   29917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:18:21.410023   29917 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:18:21.410234   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:18:21.410376   29917 start.go:301] JoinCluster: &{Name:multinode-816061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:multinode-816061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.153 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:18:21.410532   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0907 00:18:21.410556   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:18:21.413044   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:18:21.413459   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:18:21.413482   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:18:21.413616   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:18:21.413786   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:18:21.413936   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:18:21.414049   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:18:21.613274   29917 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ixa66z.cc84lb7qcfu29u7e --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
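
	The join command above is minted on the control-plane node; reproducing it by hand looks roughly like this (token and CA-cert hash redacted, kubeadm assumed to be on PATH):

$ sudo kubeadm token create --print-join-command --ttl=0
kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>
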
	I0907 00:18:21.613611   29917 start.go:314] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.153 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0907 00:18:21.613668   29917 host.go:66] Checking if "multinode-816061" exists ...
	I0907 00:18:21.614073   29917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:18:21.614123   29917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:18:21.628320   29917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41307
	I0907 00:18:21.628785   29917 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:18:21.629228   29917 main.go:141] libmachine: Using API Version  1
	I0907 00:18:21.629245   29917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:18:21.629537   29917 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:18:21.629752   29917 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:18:21.629962   29917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-816061-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0907 00:18:21.629992   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:18:21.632940   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:18:21.633365   29917 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:14:12 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:18:21.633390   29917 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:18:21.633600   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:18:21.633767   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:18:21.633917   29917 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:18:21.634038   29917 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:18:21.795114   29917 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0907 00:18:21.858846   29917 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-9qj9n, kube-system/kube-proxy-dlt4x
	I0907 00:18:24.880311   29917 command_runner.go:130] > node/multinode-816061-m03 cordoned
	I0907 00:18:24.880341   29917 command_runner.go:130] > pod "busybox-5bc68d56bd-b9wll" has DeletionTimestamp older than 1 seconds, skipping
	I0907 00:18:24.880349   29917 command_runner.go:130] > node/multinode-816061-m03 drained
	I0907 00:18:24.880373   29917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-816061-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.250386227s)
	I0907 00:18:24.880393   29917 node.go:108] successfully drained node "m03"
	I0907 00:18:24.880748   29917 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:18:24.880973   29917 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:18:24.881234   29917 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0907 00:18:24.881355   29917 round_trippers.go:463] DELETE https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:18:24.881373   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:24.881390   29917 round_trippers.go:473]     Content-Type: application/json
	I0907 00:18:24.881403   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:24.881418   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:24.896657   29917 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0907 00:18:24.896682   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:24.896693   29917 round_trippers.go:580]     Audit-Id: ff084708-29f3-4aa2-88c8-af54498fedce
	I0907 00:18:24.896701   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:24.896709   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:24.896722   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:24.896734   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:24.896744   29917 round_trippers.go:580]     Content-Length: 171
	I0907 00:18:24.896755   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:24 GMT
	I0907 00:18:24.897027   29917 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-816061-m03","kind":"nodes","uid":"92bc42a5-722e-482e-9f35-19fa4d9a6485"}}
	I0907 00:18:24.897095   29917 node.go:124] successfully deleted node "m03"
	I0907 00:18:24.897111   29917 start.go:318] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.153 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
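
	The drain-and-delete sequence above maps onto plain kubectl; a sketch using the same flags the test passed (note the deprecation warning earlier: --delete-local-data is superseded by --delete-emptydir-data):

$ kubectl --context multinode-816061 drain multinode-816061-m03 --force --grace-period=1 \
    --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
$ kubectl --context multinode-816061 delete node multinode-816061-m03
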
	I0907 00:18:24.897136   29917 start.go:322] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.153 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0907 00:18:24.897155   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ixa66z.cc84lb7qcfu29u7e --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-816061-m03"
	I0907 00:18:24.949395   29917 command_runner.go:130] ! W0907 00:18:24.943683    2396 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0907 00:18:24.949898   29917 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0907 00:18:25.085839   29917 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0907 00:18:25.085863   29917 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0907 00:18:25.848557   29917 command_runner.go:130] > [preflight] Running pre-flight checks
	I0907 00:18:25.848586   29917 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0907 00:18:25.848600   29917 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0907 00:18:25.848623   29917 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:18:25.848635   29917 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:18:25.848646   29917 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0907 00:18:25.848657   29917 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0907 00:18:25.848670   29917 command_runner.go:130] > This node has joined the cluster:
	I0907 00:18:25.848684   29917 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0907 00:18:25.848696   29917 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0907 00:18:25.848709   29917 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0907 00:18:25.848740   29917 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0907 00:18:26.098799   29917 start.go:303] JoinCluster complete in 4.6884171s
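
	On the worker itself, the join plus kubelet restart amounts to roughly the following (token and hash redacted; the unix:// scheme is spelled out here because the run's schemeless --cri-socket value drew a deprecation warning above):

$ sudo kubeadm join control-plane.minikube.internal:8443 --token <redacted> \
    --discovery-token-ca-cert-hash sha256:<redacted> --ignore-preflight-errors=all \
    --cri-socket unix:///var/run/crio/crio.sock --node-name=multinode-816061-m03
$ sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet
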
	I0907 00:18:26.098823   29917 cni.go:84] Creating CNI manager for ""
	I0907 00:18:26.098828   29917 cni.go:136] 3 nodes found, recommending kindnet
	I0907 00:18:26.098884   29917 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0907 00:18:26.104532   29917 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0907 00:18:26.104553   29917 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0907 00:18:26.104561   29917 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0907 00:18:26.104567   29917 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0907 00:18:26.104573   29917 command_runner.go:130] > Access: 2023-09-07 00:14:12.931697280 +0000
	I0907 00:18:26.104578   29917 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0907 00:18:26.104583   29917 command_runner.go:130] > Change: 2023-09-07 00:14:11.094697280 +0000
	I0907 00:18:26.104596   29917 command_runner.go:130] >  Birth: -
	I0907 00:18:26.104796   29917 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0907 00:18:26.104814   29917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0907 00:18:26.123224   29917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0907 00:18:26.447690   29917 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0907 00:18:26.454554   29917 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0907 00:18:26.459228   29917 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0907 00:18:26.477416   29917 command_runner.go:130] > daemonset.apps/kindnet configured
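
	Applying the kindnet manifest is idempotent (everything but the DaemonSet comes back "unchanged"). The equivalent apply is run on the primary node against the local kubeconfig, and a follow-up check that a kindnet pod lands on each of the three nodes might look like this (the app=kindnet label is an assumption taken from the manifest, not from this log):

$ minikube -p multinode-816061 ssh -- sudo /var/lib/minikube/binaries/v1.28.1/kubectl \
    apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
$ kubectl --context multinode-816061 -n kube-system get pods -l app=kindnet -o wide
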
	I0907 00:18:26.480290   29917 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:18:26.480542   29917 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:18:26.480847   29917 round_trippers.go:463] GET https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0907 00:18:26.480859   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.480866   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.480876   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.486016   29917 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0907 00:18:26.486044   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.486050   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.486056   29917 round_trippers.go:580]     Content-Length: 291
	I0907 00:18:26.486070   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.486076   29917 round_trippers.go:580]     Audit-Id: d7053044-0579-4d23-b495-1ab4a27a1ed2
	I0907 00:18:26.486084   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.486092   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.486101   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.486127   29917 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"583de68c-e976-43b9-bd36-bcf190acd905","resourceVersion":"900","creationTimestamp":"2023-09-07T00:04:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0907 00:18:26.486211   29917 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-816061" context rescaled to 1 replicas
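
	The rescale recorded above is done here through the deployment's scale subresource; it is roughly equivalent to:

$ kubectl --context multinode-816061 -n kube-system scale deployment coredns --replicas=1
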
	I0907 00:18:26.486244   29917 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.153 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0907 00:18:26.488192   29917 out.go:177] * Verifying Kubernetes components...
	I0907 00:18:26.489557   29917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:18:26.503777   29917 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:18:26.504111   29917 kapi.go:59] client config for multinode-816061: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/profiles/multinode-816061/client.key", CAFile:"/home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:18:26.504376   29917 node_ready.go:35] waiting up to 6m0s for node "multinode-816061-m03" to be "Ready" ...
	I0907 00:18:26.504450   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:18:26.504462   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.504473   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.504483   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.506933   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:18:26.506969   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.506982   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.506998   29917 round_trippers.go:580]     Audit-Id: 4f05ad5d-0756-4815-aa4b-90f090e1aa57
	I0907 00:18:26.507008   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.507014   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.507019   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.507025   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.507179   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m03","uid":"b165774b-cfec-4ec9-a4ae-8ac17a280abf","resourceVersion":"1234","creationTimestamp":"2023-09-07T00:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:18:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:18:25Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0907 00:18:26.507452   29917 node_ready.go:49] node "multinode-816061-m03" has status "Ready":"True"
	I0907 00:18:26.507470   29917 node_ready.go:38] duration metric: took 3.078429ms waiting for node "multinode-816061-m03" to be "Ready" ...
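
	The readiness poll above goes straight to /api/v1/nodes; from the host the same check can be expressed as a sketch like:

$ kubectl --context multinode-816061 wait --for=condition=Ready node/multinode-816061-m03 --timeout=6m
$ kubectl --context multinode-816061 get node multinode-816061-m03
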
	I0907 00:18:26.507479   29917 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:18:26.507550   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I0907 00:18:26.507561   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.507569   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.507579   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.511865   29917 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:18:26.511884   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.511893   29917 round_trippers.go:580]     Audit-Id: b63f5ccd-a7f6-496a-953b-a704666be39d
	I0907 00:18:26.511901   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.511911   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.511924   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.511934   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.511944   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.513339   29917 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1238"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"887","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82082 chars]
	I0907 00:18:26.516969   29917 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:26.517065   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8ktxh
	I0907 00:18:26.517076   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.517088   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.517102   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.520220   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:18:26.520238   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.520248   29917 round_trippers.go:580]     Audit-Id: 42bb6cfa-52b1-4c75-bd0b-99575787de74
	I0907 00:18:26.520258   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.520271   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.520284   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.520297   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.520310   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.520562   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8ktxh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c2574ba0-f19a-40c1-a06f-601bb17661f6","resourceVersion":"887","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b19f2b3b-fb45-402f-a9e6-36fca9680639","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b19f2b3b-fb45-402f-a9e6-36fca9680639\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0907 00:18:26.520965   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:18:26.521029   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.521052   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.521066   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.523998   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:18:26.524014   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.524024   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.524031   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.524040   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.524048   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.524058   29917 round_trippers.go:580]     Audit-Id: 99f31631-e382-460c-ab64-163f4f1a0e6c
	I0907 00:18:26.524068   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.524584   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0907 00:18:26.524855   29917 pod_ready.go:92] pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace has status "Ready":"True"
	I0907 00:18:26.524866   29917 pod_ready.go:81] duration metric: took 7.874324ms waiting for pod "coredns-5dd5756b68-8ktxh" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:26.524874   29917 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:26.524921   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-816061
	I0907 00:18:26.524929   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.524936   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.524941   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.527188   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:18:26.527206   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.527216   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.527225   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.527233   29917 round_trippers.go:580]     Audit-Id: b384d1ab-92b1-403c-9c7e-63e85b953702
	I0907 00:18:26.527242   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.527253   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.527263   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.527780   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-816061","namespace":"kube-system","uid":"7ff498e1-17ed-4818-befa-68a5a69b96d4","resourceVersion":"910","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.mirror":"98883a05b83cf4cdfaf6946888d8cb74","kubernetes.io/config.seen":"2023-09-07T00:04:04.251712048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0907 00:18:26.528121   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:18:26.528133   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.528143   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.528152   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.532782   29917 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0907 00:18:26.532801   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.532808   29917 round_trippers.go:580]     Audit-Id: 9a5ad885-256e-4c7f-8490-7ce6db3dfa38
	I0907 00:18:26.532814   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.532819   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.532825   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.532830   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.532835   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.533117   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0907 00:18:26.533490   29917 pod_ready.go:92] pod "etcd-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:18:26.533506   29917 pod_ready.go:81] duration metric: took 8.623748ms waiting for pod "etcd-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:26.533528   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:26.533591   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-816061
	I0907 00:18:26.533602   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.533611   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.533623   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.536226   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:18:26.536247   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.536256   29917 round_trippers.go:580]     Audit-Id: 6ade41ed-6794-4f3d-9b8e-b9359973c174
	I0907 00:18:26.536264   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.536272   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.536280   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.536287   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.536298   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.536645   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-816061","namespace":"kube-system","uid":"dbbbc2db-98c3-44e3-a18d-947bad7ffda2","resourceVersion":"880","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.212:8443","kubernetes.io/config.hash":"17d9280f4f521ce2f8119c5c317f1d67","kubernetes.io/config.mirror":"17d9280f4f521ce2f8119c5c317f1d67","kubernetes.io/config.seen":"2023-09-07T00:04:04.251716113Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0907 00:18:26.537149   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:18:26.537163   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.537173   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.537183   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.539200   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:18:26.539218   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.539227   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.539236   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.539245   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.539261   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.539269   29917 round_trippers.go:580]     Audit-Id: 651a603e-fb6f-47d7-a779-1017a2be20ee
	I0907 00:18:26.539280   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.539769   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0907 00:18:26.540066   29917 pod_ready.go:92] pod "kube-apiserver-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:18:26.540079   29917 pod_ready.go:81] duration metric: took 6.541485ms waiting for pod "kube-apiserver-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:26.540088   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:26.540159   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-816061
	I0907 00:18:26.540167   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.540175   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.540181   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.542211   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:18:26.542224   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.542230   29917 round_trippers.go:580]     Audit-Id: d3425dc8-14d4-4876-b234-5c31d08c36b8
	I0907 00:18:26.542235   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.542240   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.542246   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.542253   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.542263   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.542759   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-816061","namespace":"kube-system","uid":"ea192806-6f42-4471-8e73-ae96aa3bfa06","resourceVersion":"889","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45d88e9a1c94ef1043c5c8795b51d51f","kubernetes.io/config.mirror":"45d88e9a1c94ef1043c5c8795b51d51f","kubernetes.io/config.seen":"2023-09-07T00:04:04.251717776Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0907 00:18:26.543236   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:18:26.543251   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.543262   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.543272   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.549974   29917 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0907 00:18:26.549998   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.550007   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.550017   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.550026   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.550034   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.550043   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.550051   29917 round_trippers.go:580]     Audit-Id: 970169ec-8c09-464c-8351-6091debc3728
	I0907 00:18:26.550186   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0907 00:18:26.550578   29917 pod_ready.go:92] pod "kube-controller-manager-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:18:26.550596   29917 pod_ready.go:81] duration metric: took 10.502622ms waiting for pod "kube-controller-manager-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:26.550608   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wswp" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:26.704933   29917 request.go:629] Waited for 154.254999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wswp
	I0907 00:18:26.705002   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wswp
	I0907 00:18:26.705011   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.705021   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.705032   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.707671   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:18:26.707691   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.707698   29917 round_trippers.go:580]     Audit-Id: 58b2404f-5c51-48bc-a8e1-e0c26b584b93
	I0907 00:18:26.707703   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.707709   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.707714   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.707719   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.707725   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.707875   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2wswp","generateName":"kube-proxy-","namespace":"kube-system","uid":"4d99412b-fc2d-4fce-a7e2-80da3e220e07","resourceVersion":"1034","creationTimestamp":"2023-09-07T00:05:09Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0907 00:18:26.904607   29917 request.go:629] Waited for 196.325541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:18:26.904686   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m02
	I0907 00:18:26.904693   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:26.904703   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:26.904712   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:26.907850   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:18:26.907869   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:26.907879   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:26.907888   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:26.907896   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:26.907903   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:26.907911   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:26 GMT
	I0907 00:18:26.907919   29917 round_trippers.go:580]     Audit-Id: 0ad21e6a-340d-4313-9003-7b6e8c7881b4
	I0907 00:18:26.908065   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m02","uid":"15a4f37e-37a6-46f1-a8e3-c2ab0e788ddf","resourceVersion":"1059","creationTimestamp":"2023-09-07T00:16:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:16:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:16:43Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0907 00:18:26.908404   29917 pod_ready.go:92] pod "kube-proxy-2wswp" in "kube-system" namespace has status "Ready":"True"
	I0907 00:18:26.908421   29917 pod_ready.go:81] duration metric: took 357.80594ms waiting for pod "kube-proxy-2wswp" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:26.908434   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dlt4x" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:27.104842   29917 request.go:629] Waited for 196.350563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlt4x
	I0907 00:18:27.104910   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlt4x
	I0907 00:18:27.104931   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:27.104948   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:27.104959   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:27.108095   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:18:27.108118   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:27.108125   29917 round_trippers.go:580]     Audit-Id: b38521ea-fd6d-4cac-8bc5-81e1847b786c
	I0907 00:18:27.108131   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:27.108137   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:27.108142   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:27.108147   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:27.108153   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:27 GMT
	I0907 00:18:27.108609   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dlt4x","generateName":"kube-proxy-","namespace":"kube-system","uid":"2c56690f-de33-49ec-8cad-79fdae731daa","resourceVersion":"1188","creationTimestamp":"2023-09-07T00:06:03Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:06:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0907 00:18:27.305118   29917 request.go:629] Waited for 196.089476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:18:27.305181   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:18:27.305188   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:27.305199   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:27.305208   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:27.307978   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:18:27.308002   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:27.308012   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:27.308021   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:27.308032   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:27.308050   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:27 GMT
	I0907 00:18:27.308059   29917 round_trippers.go:580]     Audit-Id: 629371ea-4842-491c-8b75-41a83f2acb7c
	I0907 00:18:27.308067   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:27.308186   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m03","uid":"b165774b-cfec-4ec9-a4ae-8ac17a280abf","resourceVersion":"1234","creationTimestamp":"2023-09-07T00:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:18:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:18:25Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0907 00:18:27.505355   29917 request.go:629] Waited for 196.731598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlt4x
	I0907 00:18:27.505418   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlt4x
	I0907 00:18:27.505423   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:27.505441   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:27.505451   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:27.509161   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:18:27.509182   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:27.509189   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:27.509195   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:27 GMT
	I0907 00:18:27.509203   29917 round_trippers.go:580]     Audit-Id: cead4924-ae9f-4a44-a17b-053f6933302c
	I0907 00:18:27.509212   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:27.509220   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:27.509233   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:27.509679   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dlt4x","generateName":"kube-proxy-","namespace":"kube-system","uid":"2c56690f-de33-49ec-8cad-79fdae731daa","resourceVersion":"1188","creationTimestamp":"2023-09-07T00:06:03Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:06:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0907 00:18:27.705428   29917 request.go:629] Waited for 195.35728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:18:27.705497   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:18:27.705503   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:27.705513   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:27.705522   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:27.708807   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:18:27.708828   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:27.708835   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:27 GMT
	I0907 00:18:27.708840   29917 round_trippers.go:580]     Audit-Id: 30f775cf-c208-4780-a486-21b9371c6089
	I0907 00:18:27.708846   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:27.708851   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:27.708856   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:27.708862   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:27.708997   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m03","uid":"b165774b-cfec-4ec9-a4ae-8ac17a280abf","resourceVersion":"1234","creationTimestamp":"2023-09-07T00:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:18:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:18:25Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0907 00:18:28.210059   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dlt4x
	I0907 00:18:28.210082   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:28.210091   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:28.210097   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:28.213304   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:18:28.213327   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:28.213334   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:28.213340   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:28 GMT
	I0907 00:18:28.213345   29917 round_trippers.go:580]     Audit-Id: 45fe69b6-5fd9-4cfb-8899-eb34fbd4d012
	I0907 00:18:28.213351   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:28.213356   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:28.213361   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:28.213974   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dlt4x","generateName":"kube-proxy-","namespace":"kube-system","uid":"2c56690f-de33-49ec-8cad-79fdae731daa","resourceVersion":"1252","creationTimestamp":"2023-09-07T00:06:03Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:06:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0907 00:18:28.214361   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061-m03
	I0907 00:18:28.214372   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:28.214379   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:28.214385   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:28.217158   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:18:28.217175   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:28.217182   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:28.217189   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:28.217196   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:28.217202   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:28 GMT
	I0907 00:18:28.217208   29917 round_trippers.go:580]     Audit-Id: 8fb2066f-5d85-4eff-85ad-0be29375f7cc
	I0907 00:18:28.217213   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:28.217652   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061-m03","uid":"b165774b-cfec-4ec9-a4ae-8ac17a280abf","resourceVersion":"1234","creationTimestamp":"2023-09-07T00:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:18:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:18:25Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0907 00:18:28.217925   29917 pod_ready.go:92] pod "kube-proxy-dlt4x" in "kube-system" namespace has status "Ready":"True"
	I0907 00:18:28.217941   29917 pod_ready.go:81] duration metric: took 1.309498916s waiting for pod "kube-proxy-dlt4x" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:28.217950   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbzlv" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:28.304525   29917 request.go:629] Waited for 86.517471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbzlv
	I0907 00:18:28.304606   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbzlv
	I0907 00:18:28.304613   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:28.304625   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:28.304635   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:28.307348   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:18:28.307372   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:28.307383   29917 round_trippers.go:580]     Audit-Id: 144826f5-d01f-477a-8310-66eaaf5e0b9e
	I0907 00:18:28.307392   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:28.307402   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:28.307410   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:28.307417   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:28.307423   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:28 GMT
	I0907 00:18:28.307627   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbzlv","generateName":"kube-proxy-","namespace":"kube-system","uid":"6b9717d8-174b-4713-a941-382c81cc659e","resourceVersion":"846","creationTimestamp":"2023-09-07T00:04:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"38ad0197-eed5-4242-865b-16e31bc8e6a3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38ad0197-eed5-4242-865b-16e31bc8e6a3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0907 00:18:28.505472   29917 request.go:629] Waited for 197.390657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:18:28.505559   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:18:28.505567   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:28.505580   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:28.505591   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:28.508825   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:18:28.508851   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:28.508861   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:28.508869   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:28.508877   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:28.508886   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:28 GMT
	I0907 00:18:28.508894   29917 round_trippers.go:580]     Audit-Id: 7c9b2f34-3486-4151-a294-eab9a34a4366
	I0907 00:18:28.508908   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:28.509413   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0907 00:18:28.509796   29917 pod_ready.go:92] pod "kube-proxy-tbzlv" in "kube-system" namespace has status "Ready":"True"
	I0907 00:18:28.509812   29917 pod_ready.go:81] duration metric: took 291.854899ms waiting for pod "kube-proxy-tbzlv" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:28.509824   29917 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:28.705306   29917 request.go:629] Waited for 195.416219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-816061
	I0907 00:18:28.705391   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-816061
	I0907 00:18:28.705400   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:28.705411   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:28.705424   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:28.707803   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:18:28.707828   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:28.707839   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:28 GMT
	I0907 00:18:28.707853   29917 round_trippers.go:580]     Audit-Id: 83cc5634-f7e1-4a12-8225-ad31c2856a63
	I0907 00:18:28.707864   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:28.707878   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:28.707890   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:28.707903   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:28.708093   29917 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-816061","namespace":"kube-system","uid":"3fa4fad1-c309-42a9-af5f-28e6398492c7","resourceVersion":"881","creationTimestamp":"2023-09-07T00:04:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ac3fb26098ffac0d0e40ebb845f9b9fe","kubernetes.io/config.mirror":"ac3fb26098ffac0d0e40ebb845f9b9fe","kubernetes.io/config.seen":"2023-09-07T00:04:04.251718754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-07T00:04:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0907 00:18:28.904891   29917 request.go:629] Waited for 196.352061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:18:28.904972   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-816061
	I0907 00:18:28.904979   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:28.904995   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:28.905007   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:28.908539   29917 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0907 00:18:28.908565   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:28.908576   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:28.908584   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:28.908592   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:28.908600   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:28 GMT
	I0907 00:18:28.908608   29917 round_trippers.go:580]     Audit-Id: afde91bf-0960-411c-a743-8f78843643d9
	I0907 00:18:28.908617   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:28.908961   29917 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-07T00:04:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0907 00:18:28.909333   29917 pod_ready.go:92] pod "kube-scheduler-multinode-816061" in "kube-system" namespace has status "Ready":"True"
	I0907 00:18:28.909350   29917 pod_ready.go:81] duration metric: took 399.518176ms waiting for pod "kube-scheduler-multinode-816061" in "kube-system" namespace to be "Ready" ...
	I0907 00:18:28.909360   29917 pod_ready.go:38] duration metric: took 2.401871328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:18:28.909371   29917 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:18:28.909413   29917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:18:28.922155   29917 system_svc.go:56] duration metric: took 12.775554ms WaitForService to wait for kubelet.
	I0907 00:18:28.922183   29917 kubeadm.go:581] duration metric: took 2.435907835s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:18:28.922205   29917 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:18:29.104858   29917 request.go:629] Waited for 182.592363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I0907 00:18:29.104914   29917 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I0907 00:18:29.104919   29917 round_trippers.go:469] Request Headers:
	I0907 00:18:29.104927   29917 round_trippers.go:473]     Accept: application/json, */*
	I0907 00:18:29.104933   29917 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0907 00:18:29.107718   29917 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0907 00:18:29.107743   29917 round_trippers.go:577] Response Headers:
	I0907 00:18:29.107753   29917 round_trippers.go:580]     Audit-Id: 9dfeb817-abad-4ac6-a7eb-c08c25b68805
	I0907 00:18:29.107762   29917 round_trippers.go:580]     Cache-Control: no-cache, private
	I0907 00:18:29.107771   29917 round_trippers.go:580]     Content-Type: application/json
	I0907 00:18:29.107780   29917 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d255f08f-b8e4-4876-9a1d-53dd04c524d2
	I0907 00:18:29.107790   29917 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e4f8de6-6951-4dd9-a312-8943b55077ad
	I0907 00:18:29.107798   29917 round_trippers.go:580]     Date: Thu, 07 Sep 2023 00:18:29 GMT
	I0907 00:18:29.108069   29917 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1254"},"items":[{"metadata":{"name":"multinode-816061","uid":"4539147e-b4c9-4fa8-bd54-50a9dcd3c660","resourceVersion":"920","creationTimestamp":"2023-09-07T00:04:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-816061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cf47a38f14700a28a638c18f21764b75f0a296b2","minikube.k8s.io/name":"multinode-816061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_07T00_04_05_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15134 chars]
	I0907 00:18:29.108859   29917 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:18:29.108883   29917 node_conditions.go:123] node cpu capacity is 2
	I0907 00:18:29.108892   29917 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:18:29.108896   29917 node_conditions.go:123] node cpu capacity is 2
	I0907 00:18:29.108900   29917 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:18:29.108904   29917 node_conditions.go:123] node cpu capacity is 2
	I0907 00:18:29.108908   29917 node_conditions.go:105] duration metric: took 186.698635ms to run NodePressure ...
	I0907 00:18:29.108924   29917 start.go:228] waiting for startup goroutines ...
	I0907 00:18:29.108941   29917 start.go:242] writing updated cluster config ...
	I0907 00:18:29.109264   29917 ssh_runner.go:195] Run: rm -f paused
	I0907 00:18:29.158664   29917 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:18:29.160844   29917 out.go:177] * Done! kubectl is now configured to use "multinode-816061" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-07 00:14:11 UTC, ends at Thu 2023-09-07 00:18:30 UTC. --
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.221225301Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" file="storage/storage_transport.go:185"
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.221310730Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" file="storage/storage_transport.go:185"
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.221355735Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" file="storage/storage_transport.go:185"
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.221398505Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562\"" file="storage/storage_transport.go:185"
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.221526467Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da\"" file="storage/storage_transport.go:185"
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.221574374Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"" file="storage/storage_transport.go:185"
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.221695805Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.1],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774 registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2],Size_:126972880,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,RepoTags:[registry.k8s.io/kube-controller-manager:v1.28.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830 registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195],Size_:123163446,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:b462ce0c8b1ff16d4
66c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,RepoTags:[registry.k8s.io/kube-scheduler:v1.28.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4 registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e],Size_:61477686,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,RepoTags:[registry.k8s.io/kube-proxy:v1.28.1],RepoDigests:[registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3 registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c],Size_:74680215,Uid:nil,Username:,Spec:nil,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd28001
e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},&Image{Id:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,RepoTags:[registry.k8s.io/etcd:3.5.9-0],RepoDigests:[registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15 registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3],Size_:295456551,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[g
cr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},&Image{Id:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,RepoTags:[docker.io/kindest/kindnetd:v20230511-dc714da8],RepoDigests:[docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974 docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9],Size_:65249302,Uid:nil,Username:,Spec:nil,},&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spe
c:nil,},},}" file="go-grpc-middleware/chain.go:25" id=6e407747-2565-4a4f-a02b-feec0d5afefa name=/runtime.v1.ImageService/ListImages
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.244972141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a84d18bc-7c57-47b8-a513-1ba866afc677 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.245035063Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a84d18bc-7c57-47b8-a513-1ba866afc677 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.245336929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a85faaf2cced8212ce07c96f3904835cfc4ed5618d5ddb78b5cf01d371e88b,PodSandboxId:93938166de37c1b332d98781d66341f4e41931342f3b7aa4ea3c27c6bd4ab806,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045718877599448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7939193ce267e7678f12ae8e793b72b67170ad3947f490e3ab2fc8f318d5f3,PodSandboxId:250495ff085710e50805cc8f007893cdc992462f5db18aa61bc5e8da23dca746,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045698377269982,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5748def305e96ad7eba4759139a1cf23e490912e94cd1f5d15135f265ff30c18,PodSandboxId:99fda056a72163e93be8b7b2cc2eed4c5148c2e256f9f165d3b209e1f09771c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045695170242049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a53b61eeb1bc3b92052ab22016fc8d00b1bf1a8d120b7e4bcc5d0db3aa92eb9,PodSandboxId:b36d8cc78332042d2b6b82dafd33251670863722f985f9463de9a973382169e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045690049505999,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb402a4e6bd99f015be6dc75906a5b536f55ac1cc4894fb2b1bda29198407c0,PodSandboxId:129d9ff1a4e9a3d418e01b3599e8db108e81ae75b74a2dfab07bc0adcd0dc6a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045687703935897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23ea6f85335ac3c85af8e676023f51e8c0b0d48bf72277916dbc1d8a5e1ad04,PodSandboxId:93938166de37c1b332d98781d66341f4e41931342f3b7aa4ea3c27c6bd4ab806,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694045687674614383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521eb
d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9056fa044b027ab70a857dd67025132481f00dd9c151873834271f563e09c0f8,PodSandboxId:2c2ca442ae2b83cc80f930d756c17cce0709ede77d8c9bdc4962d906f010023d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045681280243364,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53870d7ad3e2e35242e77e173c542957b76764f350f3c0068c1437c6cfcb88fb,PodSandboxId:2529bcaf5599138eb19033acd2e3a32cc32d5d9c4278020eda0b6b4b065796c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045681018492942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.container.hash:
8a4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9f59a7eb7091598e154172465c95eed70673496286d78a5e0d0c831cd375b4,PodSandboxId:eb7334882594eb9cdc68a363a53f6360ce2bf44c74a88251e6c49a7a68123702,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045680780876045,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.container.hash: 19eff46,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7288a2fcb2230bb65d544854adb005ed5061a9798cf4b38c058895ca2998e1df,PodSandboxId:d7e3d83705d76bd8132db27affc6b41e8d9761abdd58ffeb3fbc02ab897b66e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045680721825876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a84d18bc-7c57-47b8-a513-1ba866afc677 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.281956741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6146f6e1-c965-42f8-874b-9e7b80536654 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.282021356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6146f6e1-c965-42f8-874b-9e7b80536654 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.282240069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a85faaf2cced8212ce07c96f3904835cfc4ed5618d5ddb78b5cf01d371e88b,PodSandboxId:93938166de37c1b332d98781d66341f4e41931342f3b7aa4ea3c27c6bd4ab806,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045718877599448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7939193ce267e7678f12ae8e793b72b67170ad3947f490e3ab2fc8f318d5f3,PodSandboxId:250495ff085710e50805cc8f007893cdc992462f5db18aa61bc5e8da23dca746,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045698377269982,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5748def305e96ad7eba4759139a1cf23e490912e94cd1f5d15135f265ff30c18,PodSandboxId:99fda056a72163e93be8b7b2cc2eed4c5148c2e256f9f165d3b209e1f09771c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045695170242049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a53b61eeb1bc3b92052ab22016fc8d00b1bf1a8d120b7e4bcc5d0db3aa92eb9,PodSandboxId:b36d8cc78332042d2b6b82dafd33251670863722f985f9463de9a973382169e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045690049505999,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb402a4e6bd99f015be6dc75906a5b536f55ac1cc4894fb2b1bda29198407c0,PodSandboxId:129d9ff1a4e9a3d418e01b3599e8db108e81ae75b74a2dfab07bc0adcd0dc6a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045687703935897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23ea6f85335ac3c85af8e676023f51e8c0b0d48bf72277916dbc1d8a5e1ad04,PodSandboxId:93938166de37c1b332d98781d66341f4e41931342f3b7aa4ea3c27c6bd4ab806,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694045687674614383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521eb
d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9056fa044b027ab70a857dd67025132481f00dd9c151873834271f563e09c0f8,PodSandboxId:2c2ca442ae2b83cc80f930d756c17cce0709ede77d8c9bdc4962d906f010023d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045681280243364,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53870d7ad3e2e35242e77e173c542957b76764f350f3c0068c1437c6cfcb88fb,PodSandboxId:2529bcaf5599138eb19033acd2e3a32cc32d5d9c4278020eda0b6b4b065796c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045681018492942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.container.hash:
8a4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9f59a7eb7091598e154172465c95eed70673496286d78a5e0d0c831cd375b4,PodSandboxId:eb7334882594eb9cdc68a363a53f6360ce2bf44c74a88251e6c49a7a68123702,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045680780876045,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.container.hash: 19eff46,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7288a2fcb2230bb65d544854adb005ed5061a9798cf4b38c058895ca2998e1df,PodSandboxId:d7e3d83705d76bd8132db27affc6b41e8d9761abdd58ffeb3fbc02ab897b66e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045680721825876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6146f6e1-c965-42f8-874b-9e7b80536654 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.319549827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d029028b-56df-49f2-a1fa-87387e758a37 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.319612781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d029028b-56df-49f2-a1fa-87387e758a37 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.319873843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a85faaf2cced8212ce07c96f3904835cfc4ed5618d5ddb78b5cf01d371e88b,PodSandboxId:93938166de37c1b332d98781d66341f4e41931342f3b7aa4ea3c27c6bd4ab806,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045718877599448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7939193ce267e7678f12ae8e793b72b67170ad3947f490e3ab2fc8f318d5f3,PodSandboxId:250495ff085710e50805cc8f007893cdc992462f5db18aa61bc5e8da23dca746,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045698377269982,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5748def305e96ad7eba4759139a1cf23e490912e94cd1f5d15135f265ff30c18,PodSandboxId:99fda056a72163e93be8b7b2cc2eed4c5148c2e256f9f165d3b209e1f09771c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045695170242049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a53b61eeb1bc3b92052ab22016fc8d00b1bf1a8d120b7e4bcc5d0db3aa92eb9,PodSandboxId:b36d8cc78332042d2b6b82dafd33251670863722f985f9463de9a973382169e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045690049505999,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb402a4e6bd99f015be6dc75906a5b536f55ac1cc4894fb2b1bda29198407c0,PodSandboxId:129d9ff1a4e9a3d418e01b3599e8db108e81ae75b74a2dfab07bc0adcd0dc6a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045687703935897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23ea6f85335ac3c85af8e676023f51e8c0b0d48bf72277916dbc1d8a5e1ad04,PodSandboxId:93938166de37c1b332d98781d66341f4e41931342f3b7aa4ea3c27c6bd4ab806,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694045687674614383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521eb
d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9056fa044b027ab70a857dd67025132481f00dd9c151873834271f563e09c0f8,PodSandboxId:2c2ca442ae2b83cc80f930d756c17cce0709ede77d8c9bdc4962d906f010023d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045681280243364,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53870d7ad3e2e35242e77e173c542957b76764f350f3c0068c1437c6cfcb88fb,PodSandboxId:2529bcaf5599138eb19033acd2e3a32cc32d5d9c4278020eda0b6b4b065796c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045681018492942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.container.hash:
8a4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9f59a7eb7091598e154172465c95eed70673496286d78a5e0d0c831cd375b4,PodSandboxId:eb7334882594eb9cdc68a363a53f6360ce2bf44c74a88251e6c49a7a68123702,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045680780876045,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.container.hash: 19eff46,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7288a2fcb2230bb65d544854adb005ed5061a9798cf4b38c058895ca2998e1df,PodSandboxId:d7e3d83705d76bd8132db27affc6b41e8d9761abdd58ffeb3fbc02ab897b66e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045680721825876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d029028b-56df-49f2-a1fa-87387e758a37 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.352842090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4f8dba0b-1f4d-4073-8a0c-343f60acf627 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.352905561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4f8dba0b-1f4d-4073-8a0c-343f60acf627 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.357945502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a85faaf2cced8212ce07c96f3904835cfc4ed5618d5ddb78b5cf01d371e88b,PodSandboxId:93938166de37c1b332d98781d66341f4e41931342f3b7aa4ea3c27c6bd4ab806,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045718877599448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7939193ce267e7678f12ae8e793b72b67170ad3947f490e3ab2fc8f318d5f3,PodSandboxId:250495ff085710e50805cc8f007893cdc992462f5db18aa61bc5e8da23dca746,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045698377269982,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5748def305e96ad7eba4759139a1cf23e490912e94cd1f5d15135f265ff30c18,PodSandboxId:99fda056a72163e93be8b7b2cc2eed4c5148c2e256f9f165d3b209e1f09771c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045695170242049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a53b61eeb1bc3b92052ab22016fc8d00b1bf1a8d120b7e4bcc5d0db3aa92eb9,PodSandboxId:b36d8cc78332042d2b6b82dafd33251670863722f985f9463de9a973382169e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045690049505999,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb402a4e6bd99f015be6dc75906a5b536f55ac1cc4894fb2b1bda29198407c0,PodSandboxId:129d9ff1a4e9a3d418e01b3599e8db108e81ae75b74a2dfab07bc0adcd0dc6a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045687703935897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23ea6f85335ac3c85af8e676023f51e8c0b0d48bf72277916dbc1d8a5e1ad04,PodSandboxId:93938166de37c1b332d98781d66341f4e41931342f3b7aa4ea3c27c6bd4ab806,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694045687674614383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521eb
d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9056fa044b027ab70a857dd67025132481f00dd9c151873834271f563e09c0f8,PodSandboxId:2c2ca442ae2b83cc80f930d756c17cce0709ede77d8c9bdc4962d906f010023d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045681280243364,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53870d7ad3e2e35242e77e173c542957b76764f350f3c0068c1437c6cfcb88fb,PodSandboxId:2529bcaf5599138eb19033acd2e3a32cc32d5d9c4278020eda0b6b4b065796c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045681018492942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.container.hash:
8a4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9f59a7eb7091598e154172465c95eed70673496286d78a5e0d0c831cd375b4,PodSandboxId:eb7334882594eb9cdc68a363a53f6360ce2bf44c74a88251e6c49a7a68123702,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045680780876045,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.container.hash: 19eff46,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7288a2fcb2230bb65d544854adb005ed5061a9798cf4b38c058895ca2998e1df,PodSandboxId:d7e3d83705d76bd8132db27affc6b41e8d9761abdd58ffeb3fbc02ab897b66e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045680721825876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4f8dba0b-1f4d-4073-8a0c-343f60acf627 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.394842139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2f388665-c2b6-4984-a869-6a146e611cd6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.394906942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2f388665-c2b6-4984-a869-6a146e611cd6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.395107275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a85faaf2cced8212ce07c96f3904835cfc4ed5618d5ddb78b5cf01d371e88b,PodSandboxId:93938166de37c1b332d98781d66341f4e41931342f3b7aa4ea3c27c6bd4ab806,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045718877599448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7939193ce267e7678f12ae8e793b72b67170ad3947f490e3ab2fc8f318d5f3,PodSandboxId:250495ff085710e50805cc8f007893cdc992462f5db18aa61bc5e8da23dca746,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045698377269982,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5748def305e96ad7eba4759139a1cf23e490912e94cd1f5d15135f265ff30c18,PodSandboxId:99fda056a72163e93be8b7b2cc2eed4c5148c2e256f9f165d3b209e1f09771c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045695170242049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a53b61eeb1bc3b92052ab22016fc8d00b1bf1a8d120b7e4bcc5d0db3aa92eb9,PodSandboxId:b36d8cc78332042d2b6b82dafd33251670863722f985f9463de9a973382169e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045690049505999,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb402a4e6bd99f015be6dc75906a5b536f55ac1cc4894fb2b1bda29198407c0,PodSandboxId:129d9ff1a4e9a3d418e01b3599e8db108e81ae75b74a2dfab07bc0adcd0dc6a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045687703935897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23ea6f85335ac3c85af8e676023f51e8c0b0d48bf72277916dbc1d8a5e1ad04,PodSandboxId:93938166de37c1b332d98781d66341f4e41931342f3b7aa4ea3c27c6bd4ab806,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694045687674614383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521eb
d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9056fa044b027ab70a857dd67025132481f00dd9c151873834271f563e09c0f8,PodSandboxId:2c2ca442ae2b83cc80f930d756c17cce0709ede77d8c9bdc4962d906f010023d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045681280243364,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53870d7ad3e2e35242e77e173c542957b76764f350f3c0068c1437c6cfcb88fb,PodSandboxId:2529bcaf5599138eb19033acd2e3a32cc32d5d9c4278020eda0b6b4b065796c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045681018492942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.container.hash:
8a4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9f59a7eb7091598e154172465c95eed70673496286d78a5e0d0c831cd375b4,PodSandboxId:eb7334882594eb9cdc68a363a53f6360ce2bf44c74a88251e6c49a7a68123702,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045680780876045,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.container.hash: 19eff46,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7288a2fcb2230bb65d544854adb005ed5061a9798cf4b38c058895ca2998e1df,PodSandboxId:d7e3d83705d76bd8132db27affc6b41e8d9761abdd58ffeb3fbc02ab897b66e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045680721825876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2f388665-c2b6-4984-a869-6a146e611cd6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.431526348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4eb1624b-cfa3-45a0-bab3-37bf8134a362 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.431588571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4eb1624b-cfa3-45a0-bab3-37bf8134a362 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:18:30 multinode-816061 crio[710]: time="2023-09-07 00:18:30.431898694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a85faaf2cced8212ce07c96f3904835cfc4ed5618d5ddb78b5cf01d371e88b,PodSandboxId:93938166de37c1b332d98781d66341f4e41931342f3b7aa4ea3c27c6bd4ab806,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694045718877599448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7939193ce267e7678f12ae8e793b72b67170ad3947f490e3ab2fc8f318d5f3,PodSandboxId:250495ff085710e50805cc8f007893cdc992462f5db18aa61bc5e8da23dca746,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694045698377269982,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-zvzjl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 346dd02e-d6b2-481f-837e-45b618a3fd04,},Annotations:map[string]string{io.kubernetes.container.hash: 493a506f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5748def305e96ad7eba4759139a1cf23e490912e94cd1f5d15135f265ff30c18,PodSandboxId:99fda056a72163e93be8b7b2cc2eed4c5148c2e256f9f165d3b209e1f09771c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694045695170242049,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8ktxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2574ba0-f19a-40c1-a06f-601bb17661f6,},Annotations:map[string]string{io.kubernetes.container.hash: b85793d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a53b61eeb1bc3b92052ab22016fc8d00b1bf1a8d120b7e4bcc5d0db3aa92eb9,PodSandboxId:b36d8cc78332042d2b6b82dafd33251670863722f985f9463de9a973382169e1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694045690049505999,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xgbtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 137c032b-12d1-4179-8416-0f3cc5733842,},Annotations:map[string]string{io.kubernetes.container.hash: 21f63df9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb402a4e6bd99f015be6dc75906a5b536f55ac1cc4894fb2b1bda29198407c0,PodSandboxId:129d9ff1a4e9a3d418e01b3599e8db108e81ae75b74a2dfab07bc0adcd0dc6a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694045687703935897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbzlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9717d8-174b-4713-a941-382c81cc
659e,},Annotations:map[string]string{io.kubernetes.container.hash: d4a64c8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23ea6f85335ac3c85af8e676023f51e8c0b0d48bf72277916dbc1d8a5e1ad04,PodSandboxId:93938166de37c1b332d98781d66341f4e41931342f3b7aa4ea3c27c6bd4ab806,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694045687674614383,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce467f7-aaa1-4391-9bc9-39ef0521eb
d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0d3ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9056fa044b027ab70a857dd67025132481f00dd9c151873834271f563e09c0f8,PodSandboxId:2c2ca442ae2b83cc80f930d756c17cce0709ede77d8c9bdc4962d906f010023d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694045681280243364,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3fb26098ffac0d0e40ebb845f9b9fe,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53870d7ad3e2e35242e77e173c542957b76764f350f3c0068c1437c6cfcb88fb,PodSandboxId:2529bcaf5599138eb19033acd2e3a32cc32d5d9c4278020eda0b6b4b065796c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694045681018492942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98883a05b83cf4cdfaf6946888d8cb74,},Annotations:map[string]string{io.kubernetes.container.hash:
8a4344,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9f59a7eb7091598e154172465c95eed70673496286d78a5e0d0c831cd375b4,PodSandboxId:eb7334882594eb9cdc68a363a53f6360ce2bf44c74a88251e6c49a7a68123702,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694045680780876045,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17d9280f4f521ce2f8119c5c317f1d67,},Annotations:map[string]string{io.kubernetes.container.hash: 19eff46,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7288a2fcb2230bb65d544854adb005ed5061a9798cf4b38c058895ca2998e1df,PodSandboxId:d7e3d83705d76bd8132db27affc6b41e8d9761abdd58ffeb3fbc02ab897b66e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694045680721825876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-816061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d88e9a1c94ef1043c5c8795b51d51f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4eb1624b-cfa3-45a0-bab3-37bf8134a362 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	d5a85faaf2cce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   93938166de37c
	cc7939193ce26       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   250495ff08571
	5748def305e96       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   99fda056a7216
	2a53b61eeb1bc       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      3 minutes ago       Running             kindnet-cni               1                   b36d8cc783320
	3eb402a4e6bd9       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      3 minutes ago       Running             kube-proxy                1                   129d9ff1a4e9a
	b23ea6f85335a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   93938166de37c
	9056fa044b027       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      3 minutes ago       Running             kube-scheduler            1                   2c2ca442ae2b8
	53870d7ad3e2e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   2529bcaf55991
	7f9f59a7eb709       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      3 minutes ago       Running             kube-apiserver            1                   eb7334882594e
	7288a2fcb2230       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      3 minutes ago       Running             kube-controller-manager   1                   d7e3d83705d76
	
	* 
	* ==> coredns [5748def305e96ad7eba4759139a1cf23e490912e94cd1f5d15135f265ff30c18] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50978 - 15113 "HINFO IN 4333951560831796367.6999227232088738993. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027263591s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-816061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-816061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=multinode-816061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_07T00_04_05_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:04:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-816061
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Sep 2023 00:18:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 00:15:17 +0000   Thu, 07 Sep 2023 00:03:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 00:15:17 +0000   Thu, 07 Sep 2023 00:03:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 00:15:17 +0000   Thu, 07 Sep 2023 00:03:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 00:15:17 +0000   Thu, 07 Sep 2023 00:14:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    multinode-816061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 73622b4a66c04eabb97791231e099de8
	  System UUID:                73622b4a-66c0-4eab-b977-91231e099de8
	  Boot ID:                    1e2e6beb-64b9-4e0a-b802-21a8040a1af9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-zvzjl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-8ktxh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-multinode-816061                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-xgbtc                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-multinode-816061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-816061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-tbzlv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-816061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-816061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-816061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-816061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-816061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-816061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-816061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           14m                    node-controller  Node multinode-816061 event: Registered Node multinode-816061 in Controller
	  Normal  NodeReady                14m                    kubelet          Node multinode-816061 status is now: NodeReady
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m51s)  kubelet          Node multinode-816061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m51s)  kubelet          Node multinode-816061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m51s)  kubelet          Node multinode-816061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-816061 event: Registered Node multinode-816061 in Controller
	
	
	Name:               multinode-816061-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-816061-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:16:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-816061-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Sep 2023 00:18:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 00:16:43 +0000   Thu, 07 Sep 2023 00:16:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 00:16:43 +0000   Thu, 07 Sep 2023 00:16:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 00:16:43 +0000   Thu, 07 Sep 2023 00:16:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 00:16:43 +0000   Thu, 07 Sep 2023 00:16:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    multinode-816061-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 81777933e8a54565a8bde95c976c63f7
	  System UUID:                81777933-e8a5-4565-a8bd-e95c976c63f7
	  Boot ID:                    b2a11f07-fbb6-42b6-8be4-e19a5c1ebaed
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-5g2pg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-gdck2               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-2wswp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 109s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-816061-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-816061-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-816061-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-816061-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m53s                  kubelet     Node multinode-816061-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m22s (x2 over 3m22s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 107s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  107s                   kubelet     Node multinode-816061-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    107s                   kubelet     Node multinode-816061-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     107s                   kubelet     Node multinode-816061-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  107s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                107s                   kubelet     Node multinode-816061-m02 status is now: NodeReady
	
	
	Name:               multinode-816061-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-816061-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:18:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-816061-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 00:18:25 +0000   Thu, 07 Sep 2023 00:18:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 00:18:25 +0000   Thu, 07 Sep 2023 00:18:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 00:18:25 +0000   Thu, 07 Sep 2023 00:18:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 00:18:25 +0000   Thu, 07 Sep 2023 00:18:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.153
	  Hostname:    multinode-816061-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3282cde6fb794883be35d7f8aad79434
	  System UUID:                3282cde6-fb79-4883-be35-d7f8aad79434
	  Boot ID:                    d8cc52af-efab-4f9e-af55-568ac0eb7681
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-b9wll    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kindnet-9qj9n               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-dlt4x            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 3s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-816061-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-816061-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-816061-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-816061-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet     Node multinode-816061-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet     Node multinode-816061-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet     Node multinode-816061-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             71s                 kubelet     Node multinode-816061-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        42s (x2 over 102s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeReady                10s (x2 over 11m)   kubelet     Node multinode-816061-m03 status is now: NodeReady
	  Normal   Starting                 5s                  kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s                  kubelet     Node multinode-816061-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s                  kubelet     Node multinode-816061-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                  kubelet     Node multinode-816061-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s                  kubelet     Node multinode-816061-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Sep 7 00:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070622] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.302538] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.375554] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.130643] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.418108] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.050620] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.106751] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.142596] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.108952] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.210307] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +16.767075] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [53870d7ad3e2e35242e77e173c542957b76764f350f3c0068c1437c6cfcb88fb] <==
	* {"level":"info","ts":"2023-09-07T00:14:43.070793Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-07T00:14:43.0709Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-09-07T00:14:43.071135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f switched to configuration voters=(17211001333175699727)"}
	{"level":"info","ts":"2023-09-07T00:14:43.071209Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","added-peer-id":"eed9c28654b6490f","added-peer-peer-urls":["https://192.168.39.212:2380"]}
	{"level":"info","ts":"2023-09-07T00:14:43.071323Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:14:43.071368Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:14:43.076551Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-07T00:14:43.076653Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.212:2380"}
	{"level":"info","ts":"2023-09-07T00:14:43.076847Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.212:2380"}
	{"level":"info","ts":"2023-09-07T00:14:43.077187Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"eed9c28654b6490f","initial-advertise-peer-urls":["https://192.168.39.212:2380"],"listen-peer-urls":["https://192.168.39.212:2380"],"advertise-client-urls":["https://192.168.39.212:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.212:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-07T00:14:43.077422Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-07T00:14:44.750577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-07T00:14:44.750644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-07T00:14:44.750693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f received MsgPreVoteResp from eed9c28654b6490f at term 2"}
	{"level":"info","ts":"2023-09-07T00:14:44.75077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f became candidate at term 3"}
	{"level":"info","ts":"2023-09-07T00:14:44.750779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f received MsgVoteResp from eed9c28654b6490f at term 3"}
	{"level":"info","ts":"2023-09-07T00:14:44.750787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f became leader at term 3"}
	{"level":"info","ts":"2023-09-07T00:14:44.750794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: eed9c28654b6490f elected leader eed9c28654b6490f at term 3"}
	{"level":"info","ts":"2023-09-07T00:14:44.753421Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"eed9c28654b6490f","local-member-attributes":"{Name:multinode-816061 ClientURLs:[https://192.168.39.212:2379]}","request-path":"/0/members/eed9c28654b6490f/attributes","cluster-id":"f8d3b95e5bbb719c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-07T00:14:44.753448Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:14:44.753892Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:14:44.754406Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-07T00:14:44.755405Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.212:2379"}
	{"level":"info","ts":"2023-09-07T00:14:44.755836Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-07T00:14:44.75588Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:18:30 up 4 min,  0 users,  load average: 0.45, 0.36, 0.16
	Linux multinode-816061 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [2a53b61eeb1bc3b92052ab22016fc8d00b1bf1a8d120b7e4bcc5d0db3aa92eb9] <==
	* I0907 00:17:41.470275       1 main.go:250] Node multinode-816061-m03 has CIDR [10.244.3.0/24] 
	I0907 00:17:51.476039       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I0907 00:17:51.476164       1 main.go:227] handling current node
	I0907 00:17:51.476204       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0907 00:17:51.476212       1 main.go:250] Node multinode-816061-m02 has CIDR [10.244.1.0/24] 
	I0907 00:17:51.476316       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0907 00:17:51.476349       1 main.go:250] Node multinode-816061-m03 has CIDR [10.244.3.0/24] 
	I0907 00:18:01.482357       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I0907 00:18:01.482444       1 main.go:227] handling current node
	I0907 00:18:01.482468       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0907 00:18:01.482630       1 main.go:250] Node multinode-816061-m02 has CIDR [10.244.1.0/24] 
	I0907 00:18:01.482939       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0907 00:18:01.482985       1 main.go:250] Node multinode-816061-m03 has CIDR [10.244.3.0/24] 
	I0907 00:18:11.496784       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I0907 00:18:11.496947       1 main.go:227] handling current node
	I0907 00:18:11.496974       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0907 00:18:11.496993       1 main.go:250] Node multinode-816061-m02 has CIDR [10.244.1.0/24] 
	I0907 00:18:11.497096       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0907 00:18:11.497114       1 main.go:250] Node multinode-816061-m03 has CIDR [10.244.3.0/24] 
	I0907 00:18:21.504690       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I0907 00:18:21.504973       1 main.go:227] handling current node
	I0907 00:18:21.505017       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0907 00:18:21.505037       1 main.go:250] Node multinode-816061-m02 has CIDR [10.244.1.0/24] 
	I0907 00:18:21.505178       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0907 00:18:21.505199       1 main.go:250] Node multinode-816061-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [7f9f59a7eb7091598e154172465c95eed70673496286d78a5e0d0c831cd375b4] <==
	* I0907 00:14:46.152027       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0907 00:14:46.155296       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0907 00:14:46.155432       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0907 00:14:46.156333       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0907 00:14:46.156376       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0907 00:14:46.191347       1 shared_informer.go:318] Caches are synced for configmaps
	I0907 00:14:46.191359       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0907 00:14:46.191497       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0907 00:14:46.205992       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0907 00:14:46.256506       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0907 00:14:46.258166       1 aggregator.go:166] initial CRD sync complete...
	I0907 00:14:46.258224       1 autoregister_controller.go:141] Starting autoregister controller
	I0907 00:14:46.258248       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0907 00:14:46.258272       1 cache.go:39] Caches are synced for autoregister controller
	I0907 00:14:46.258918       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0907 00:14:46.293585       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0907 00:14:46.294525       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0907 00:14:46.294558       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0907 00:14:47.092913       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0907 00:14:48.955319       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0907 00:14:49.121313       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0907 00:14:49.140816       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0907 00:14:49.260840       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0907 00:14:49.279454       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0907 00:15:36.430418       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [7288a2fcb2230bb65d544854adb005ed5061a9798cf4b38c058895ca2998e1df] <==
	* I0907 00:16:43.202411       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-816061-m02" podCIDRs=["10.244.1.0/24"]
	I0907 00:16:43.285336       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.772159ms"
	I0907 00:16:43.285450       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.33µs"
	I0907 00:16:43.319206       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-816061-m02"
	I0907 00:16:44.084342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.048µs"
	I0907 00:16:55.350696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.838µs"
	I0907 00:16:55.951073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="62.823µs"
	I0907 00:16:55.954321       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.288µs"
	I0907 00:17:19.428770       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-816061-m02"
	I0907 00:18:20.326096       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-816061-m02"
	I0907 00:18:20.821633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="253.207µs"
	I0907 00:18:21.881338       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-5g2pg"
	I0907 00:18:21.894538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="30.895606ms"
	I0907 00:18:21.907389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="12.785836ms"
	I0907 00:18:21.925211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.756144ms"
	I0907 00:18:21.925412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="111.783µs"
	I0907 00:18:23.231996       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.335893ms"
	I0907 00:18:23.232092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.932µs"
	I0907 00:18:23.761643       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-b9wll" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-b9wll"
	I0907 00:18:24.893023       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-816061-m02"
	I0907 00:18:25.535212       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-816061-m02"
	I0907 00:18:25.535350       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-816061-m03\" does not exist"
	I0907 00:18:25.548397       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-816061-m03" podCIDRs=["10.244.2.0/24"]
	I0907 00:18:25.687383       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-816061-m02"
	I0907 00:18:26.444185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="103.096µs"
	
	* 
	* ==> kube-proxy [3eb402a4e6bd99f015be6dc75906a5b536f55ac1cc4894fb2b1bda29198407c0] <==
	* I0907 00:14:48.039008       1 server_others.go:69] "Using iptables proxy"
	I0907 00:14:48.126336       1 node.go:141] Successfully retrieved node IP: 192.168.39.212
	I0907 00:14:48.351529       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0907 00:14:48.351757       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0907 00:14:48.389537       1 server_others.go:152] "Using iptables Proxier"
	I0907 00:14:48.389791       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0907 00:14:48.390382       1 server.go:846] "Version info" version="v1.28.1"
	I0907 00:14:48.391828       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:14:48.392678       1 config.go:188] "Starting service config controller"
	I0907 00:14:48.401687       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0907 00:14:48.416319       1 shared_informer.go:318] Caches are synced for service config
	I0907 00:14:48.413682       1 config.go:97] "Starting endpoint slice config controller"
	I0907 00:14:48.416480       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0907 00:14:48.414253       1 config.go:315] "Starting node config controller"
	I0907 00:14:48.416689       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0907 00:14:48.516812       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0907 00:14:48.516900       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [9056fa044b027ab70a857dd67025132481f00dd9c151873834271f563e09c0f8] <==
	* I0907 00:14:43.398627       1 serving.go:348] Generated self-signed cert in-memory
	W0907 00:14:46.195195       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0907 00:14:46.195476       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0907 00:14:46.195607       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0907 00:14:46.195636       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0907 00:14:46.232664       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0907 00:14:46.232873       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:14:46.236347       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:14:46.236476       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0907 00:14:46.239942       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0907 00:14:46.240107       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0907 00:14:46.337261       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-07 00:14:11 UTC, ends at Thu 2023-09-07 00:18:31 UTC. --
	Sep 07 00:14:48 multinode-816061 kubelet[915]: E0907 00:14:48.320353     915 projected.go:198] Error preparing data for projected volume kube-api-access-2f45l for pod default/busybox-5bc68d56bd-zvzjl: object "default"/"kube-root-ca.crt" not registered
	Sep 07 00:14:48 multinode-816061 kubelet[915]: E0907 00:14:48.320424     915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/346dd02e-d6b2-481f-837e-45b618a3fd04-kube-api-access-2f45l podName:346dd02e-d6b2-481f-837e-45b618a3fd04 nodeName:}" failed. No retries permitted until 2023-09-07 00:14:50.320403333 +0000 UTC m=+10.912399198 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2f45l" (UniqueName: "kubernetes.io/projected/346dd02e-d6b2-481f-837e-45b618a3fd04-kube-api-access-2f45l") pod "busybox-5bc68d56bd-zvzjl" (UID: "346dd02e-d6b2-481f-837e-45b618a3fd04") : object "default"/"kube-root-ca.crt" not registered
	Sep 07 00:14:48 multinode-816061 kubelet[915]: E0907 00:14:48.669680     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-zvzjl" podUID="346dd02e-d6b2-481f-837e-45b618a3fd04"
	Sep 07 00:14:48 multinode-816061 kubelet[915]: E0907 00:14:48.669827     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-8ktxh" podUID="c2574ba0-f19a-40c1-a06f-601bb17661f6"
	Sep 07 00:14:50 multinode-816061 kubelet[915]: E0907 00:14:50.235136     915 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 07 00:14:50 multinode-816061 kubelet[915]: E0907 00:14:50.235211     915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2574ba0-f19a-40c1-a06f-601bb17661f6-config-volume podName:c2574ba0-f19a-40c1-a06f-601bb17661f6 nodeName:}" failed. No retries permitted until 2023-09-07 00:14:54.235196658 +0000 UTC m=+14.827192509 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c2574ba0-f19a-40c1-a06f-601bb17661f6-config-volume") pod "coredns-5dd5756b68-8ktxh" (UID: "c2574ba0-f19a-40c1-a06f-601bb17661f6") : object "kube-system"/"coredns" not registered
	Sep 07 00:14:50 multinode-816061 kubelet[915]: E0907 00:14:50.335935     915 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 07 00:14:50 multinode-816061 kubelet[915]: E0907 00:14:50.335987     915 projected.go:198] Error preparing data for projected volume kube-api-access-2f45l for pod default/busybox-5bc68d56bd-zvzjl: object "default"/"kube-root-ca.crt" not registered
	Sep 07 00:14:50 multinode-816061 kubelet[915]: E0907 00:14:50.336065     915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/346dd02e-d6b2-481f-837e-45b618a3fd04-kube-api-access-2f45l podName:346dd02e-d6b2-481f-837e-45b618a3fd04 nodeName:}" failed. No retries permitted until 2023-09-07 00:14:54.336049475 +0000 UTC m=+14.928045330 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2f45l" (UniqueName: "kubernetes.io/projected/346dd02e-d6b2-481f-837e-45b618a3fd04-kube-api-access-2f45l") pod "busybox-5bc68d56bd-zvzjl" (UID: "346dd02e-d6b2-481f-837e-45b618a3fd04") : object "default"/"kube-root-ca.crt" not registered
	Sep 07 00:14:50 multinode-816061 kubelet[915]: E0907 00:14:50.669664     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-zvzjl" podUID="346dd02e-d6b2-481f-837e-45b618a3fd04"
	Sep 07 00:14:50 multinode-816061 kubelet[915]: E0907 00:14:50.669867     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-8ktxh" podUID="c2574ba0-f19a-40c1-a06f-601bb17661f6"
	Sep 07 00:14:51 multinode-816061 kubelet[915]: I0907 00:14:51.340984     915 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 07 00:15:18 multinode-816061 kubelet[915]: I0907 00:15:18.857216     915 scope.go:117] "RemoveContainer" containerID="b23ea6f85335ac3c85af8e676023f51e8c0b0d48bf72277916dbc1d8a5e1ad04"
	Sep 07 00:15:39 multinode-816061 kubelet[915]: E0907 00:15:39.688996     915 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 00:15:39 multinode-816061 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 00:15:39 multinode-816061 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 00:15:39 multinode-816061 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 00:16:39 multinode-816061 kubelet[915]: E0907 00:16:39.688192     915 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 00:16:39 multinode-816061 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 00:16:39 multinode-816061 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 00:16:39 multinode-816061 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 00:17:39 multinode-816061 kubelet[915]: E0907 00:17:39.691169     915 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 00:17:39 multinode-816061 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 00:17:39 multinode-816061 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 00:17:39 multinode-816061 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-816061 -n multinode-816061
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-816061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (691.03s)
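The repeated kubelet errors at the end of the log above ("can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)") come from kubelet's periodic iptables canary; they typically mean the guest kernel has no usable ip6table_nat support and are a symptom alongside the restart failure rather than its cause. A hypothetical follow-up from the CI host, assuming the multinode-816061 profile is still reachable, would be:

	# check whether the ip6tables nat table is usable inside the guest that produced the log above
	minikube ssh -p multinode-816061 -- "lsmod | grep -i ip6table_nat"
	# try to load the module; if it is not built into the guest kernel/ISO, modprobe fails the same way
	minikube ssh -p multinode-816061 -- "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"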

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (142.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 stop
E0907 00:19:02.118093   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-816061 stop: exit status 82 (2m1.021293162s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-816061"  ...
	* Stopping node "multinode-816061"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-816061 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-816061 status: exit status 3 (18.761536537s)

                                                
                                                
-- stdout --
	multinode-816061
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-816061-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:20:53.447109   32735 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host
	E0907 00:20:53.447151   32735 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-816061 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-816061 -n multinode-816061
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-816061 -n multinode-816061: exit status 3 (3.156726361s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:20:56.775126   32827 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host
	E0907 00:20:56.775153   32827 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-816061" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.94s)
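This failure has two linked symptoms: "minikube stop" exits with status 82 (GUEST_STOP_TIMEOUT) because the control-plane VM is still reported "Running", and the follow-up status probes then fail with "no route to host" against 192.168.39.212. A minimal sketch of checking this by hand, reusing the same status flags helpers_test.go uses above and assuming the profile still exists, is to poll the host state with an explicit deadline instead of a single call:

	# hypothetical manual follow-up, not part of the test suite
	profile=multinode-816061
	deadline=$((SECONDS + 120))
	while (( SECONDS < deadline )); do
	  state=$(out/minikube-linux-amd64 status --format='{{.Host}}' -p "$profile" -n "$profile" 2>/dev/null)
	  echo "$(date +%T) host=${state:-unknown}"
	  [ "$state" = "Stopped" ] && exit 0
	  sleep 5
	done
	echo "host never reached Stopped; consistent with GUEST_STOP_TIMEOUT above" >&2
	exit 82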

                                                
                                    
x
+
TestPreload (292.5s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-083708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0907 00:29:27.894012   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0907 00:31:17.593822   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:31:24.846423   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-083708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m28.605352667s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-083708 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-083708 image pull gcr.io/k8s-minikube/busybox: (2.79175188s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-083708
E0907 00:32:05.163112   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-083708: exit status 82 (2m1.672830488s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-083708"  ...
	* Stopping node "test-preload-083708"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-083708 failed: exit status 82
panic.go:522: *** TestPreload FAILED at 2023-09-07 00:33:47.767350353 +0000 UTC m=+3366.488805892
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-083708 -n test-preload-083708
E0907 00:34:02.117505   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-083708 -n test-preload-083708: exit status 3 (18.514407471s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:34:06.279111   35861 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host
	E0907 00:34:06.279134   35861 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-083708" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-083708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-083708
--- FAIL: TestPreload (292.50s)
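TestPreload trips over the same stop timeout as StopMultiNode above: exit status 82 from "minikube stop", then "no route to host" when probing 192.168.39.3. Because the driver is kvm2, a hypothetical way to see whether the guest is genuinely wedged (rather than minikube merely losing its SSH session) is to query libvirt directly on the CI host; the domain name below assumes the kvm2 driver's convention of naming the libvirt domain after the profile:

	# libvirt-side check; requires access to the qemu:///system socket
	virsh -c qemu:///system list --all
	virsh -c qemu:///system domstate test-preload-083708    # "shut off" is expected after a clean stop
	# if the domain is still running well after the stop, the guest console may show what is blocking shutdown
	virsh -c qemu:///system console test-preload-083708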

                                                
                                    
x
+
TestRunningBinaryUpgrade (152.78s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.6.2.3977229122.exe start -p running-upgrade-395302 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0907 00:36:17.593485   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:36:24.848433   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.6.2.3977229122.exe start -p running-upgrade-395302 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m24.370854082s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-395302 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-395302 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (4.666938051s)

                                                
                                                
-- stdout --
	* [running-upgrade-395302] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-395302 in cluster running-upgrade-395302
	* Updating the running kvm2 "running-upgrade-395302" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:38:30.131421   38715 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:38:30.131629   38715 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:38:30.131651   38715 out.go:309] Setting ErrFile to fd 2...
	I0907 00:38:30.131666   38715 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:38:30.131986   38715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:38:30.132711   38715 out.go:303] Setting JSON to false
	I0907 00:38:30.134038   38715 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4854,"bootTime":1694042256,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:38:30.134138   38715 start.go:138] virtualization: kvm guest
	I0907 00:38:30.136624   38715 out.go:177] * [running-upgrade-395302] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:38:30.138155   38715 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:38:30.138206   38715 notify.go:220] Checking for updates...
	I0907 00:38:30.139521   38715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:38:30.141036   38715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:38:30.142475   38715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:38:30.143888   38715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:38:30.145290   38715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:38:30.147164   38715 config.go:182] Loaded profile config "running-upgrade-395302": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0907 00:38:30.147187   38715 start_flags.go:686] config upgrade: Driver=kvm2
	I0907 00:38:30.147199   38715 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0907 00:38:30.147332   38715 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/running-upgrade-395302/config.json ...
	I0907 00:38:30.148093   38715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:38:30.148138   38715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:38:30.166647   38715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41699
	I0907 00:38:30.167212   38715 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:38:30.167857   38715 main.go:141] libmachine: Using API Version  1
	I0907 00:38:30.167890   38715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:38:30.168306   38715 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:38:30.168486   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .DriverName
	I0907 00:38:30.169829   38715 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0907 00:38:30.171128   38715 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:38:30.171594   38715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:38:30.171668   38715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:38:30.190116   38715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43067
	I0907 00:38:30.190607   38715 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:38:30.191328   38715 main.go:141] libmachine: Using API Version  1
	I0907 00:38:30.191358   38715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:38:30.191742   38715 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:38:30.191960   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .DriverName
	I0907 00:38:30.231282   38715 out.go:177] * Using the kvm2 driver based on existing profile
	I0907 00:38:30.232826   38715 start.go:298] selected driver: kvm2
	I0907 00:38:30.232845   38715 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-395302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.169 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0907 00:38:30.232999   38715 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:38:30.233956   38715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:38:30.234042   38715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:38:30.254414   38715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 00:38:30.254919   38715 cni.go:84] Creating CNI manager for ""
	I0907 00:38:30.254945   38715 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0907 00:38:30.254953   38715 start_flags.go:321] config:
	{Name:running-upgrade-395302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.169 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0907 00:38:30.255184   38715 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:38:30.257091   38715 out.go:177] * Starting control plane node running-upgrade-395302 in cluster running-upgrade-395302
	I0907 00:38:30.258357   38715 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0907 00:38:30.707066   38715 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0907 00:38:30.707223   38715 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/running-upgrade-395302/config.json ...
	I0907 00:38:30.707369   38715 cache.go:107] acquiring lock: {Name:mk26f05d7c4624705d894605a55d55faf900f80e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:38:30.707419   38715 cache.go:107] acquiring lock: {Name:mk09936f8ca333a4f4eed016557aac6597ad6ba7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:38:30.707466   38715 cache.go:107] acquiring lock: {Name:mk771fb3339fe97a4385bee215495cef98959127 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:38:30.707453   38715 cache.go:107] acquiring lock: {Name:mk3dee7e5f6eceeab2f3e6acc96e2842cd7cabe8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:38:30.707501   38715 cache.go:107] acquiring lock: {Name:mk777a2c6c8af6f8c4f579806b6f1802d6d0d780 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:38:30.707540   38715 start.go:365] acquiring machines lock for running-upgrade-395302: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:38:30.707560   38715 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0907 00:38:30.707608   38715 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0907 00:38:30.707621   38715 start.go:369] acquired machines lock for "running-upgrade-395302" in 62.592µs
	I0907 00:38:30.707639   38715 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:38:30.707645   38715 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0907 00:38:30.707649   38715 fix.go:54] fixHost starting: minikube
	I0907 00:38:30.707382   38715 cache.go:107] acquiring lock: {Name:mkd60f16278a3e2c71e588d7ee3a4c6470160b75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:38:30.707767   38715 cache.go:107] acquiring lock: {Name:mkb1c77274cfa9b3493e4a1fd02e6a2650efe360 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:38:30.707768   38715 cache.go:107] acquiring lock: {Name:mk8a0d25472c2300613db21a0ebf2c980b39f32a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:38:30.707485   38715 cache.go:115] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0907 00:38:30.707803   38715 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 445.014µs
	I0907 00:38:30.707820   38715 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0907 00:38:30.707788   38715 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0907 00:38:30.707836   38715 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0907 00:38:30.707863   38715 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0907 00:38:30.707897   38715 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0907 00:38:30.708070   38715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:38:30.708111   38715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:38:30.709102   38715 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0907 00:38:30.709162   38715 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0907 00:38:30.709166   38715 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0907 00:38:30.709108   38715 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0907 00:38:30.709471   38715 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0907 00:38:30.709586   38715 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0907 00:38:30.709704   38715 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0907 00:38:30.726630   38715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40733
	I0907 00:38:30.727199   38715 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:38:30.727680   38715 main.go:141] libmachine: Using API Version  1
	I0907 00:38:30.727708   38715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:38:30.728054   38715 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:38:30.728307   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .DriverName
	I0907 00:38:30.728467   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetState
	I0907 00:38:30.913926   38715 cache.go:162] opening:  /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0907 00:38:30.997756   38715 cache.go:162] opening:  /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0907 00:38:31.002711   38715 cache.go:157] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0907 00:38:31.002738   38715 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 294.973075ms
	I0907 00:38:31.002752   38715 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0907 00:38:31.014466   38715 cache.go:162] opening:  /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0907 00:38:31.054443   38715 cache.go:162] opening:  /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0907 00:38:31.060788   38715 cache.go:162] opening:  /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0907 00:38:31.066065   38715 cache.go:162] opening:  /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0907 00:38:31.085252   38715 cache.go:162] opening:  /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0907 00:38:31.186909   38715 fix.go:102] recreateIfNeeded on running-upgrade-395302: state=Running err=<nil>
	W0907 00:38:31.187256   38715 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:38:31.190520   38715 out.go:177] * Updating the running kvm2 "running-upgrade-395302" VM ...
	I0907 00:38:31.192220   38715 machine.go:88] provisioning docker machine ...
	I0907 00:38:31.192367   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .DriverName
	I0907 00:38:31.196547   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetMachineName
	I0907 00:38:31.198056   38715 buildroot.go:166] provisioning hostname "running-upgrade-395302"
	I0907 00:38:31.198186   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetMachineName
	I0907 00:38:31.198662   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHHostname
	I0907 00:38:31.202875   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:31.202908   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:83:03", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-07 01:36:41 +0000 UTC Type:0 Mac:52:54:00:b8:83:03 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:running-upgrade-395302 Clientid:01:52:54:00:b8:83:03}
	I0907 00:38:31.202931   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined IP address 192.168.50.169 and MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:31.202978   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHPort
	I0907 00:38:31.204572   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHKeyPath
	I0907 00:38:31.204709   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHKeyPath
	I0907 00:38:31.206395   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHUsername
	I0907 00:38:31.208204   38715 main.go:141] libmachine: Using SSH client type: native
	I0907 00:38:31.208864   38715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I0907 00:38:31.208885   38715 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-395302 && echo "running-upgrade-395302" | sudo tee /etc/hostname
	I0907 00:38:31.428175   38715 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-395302
	
	I0907 00:38:31.428267   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHHostname
	I0907 00:38:31.435106   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHPort
	I0907 00:38:31.435277   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:31.435306   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:83:03", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-07 01:36:41 +0000 UTC Type:0 Mac:52:54:00:b8:83:03 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:running-upgrade-395302 Clientid:01:52:54:00:b8:83:03}
	I0907 00:38:31.435325   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined IP address 192.168.50.169 and MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:31.435367   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHKeyPath
	I0907 00:38:31.437763   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHKeyPath
	I0907 00:38:31.437933   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHUsername
	I0907 00:38:31.438095   38715 main.go:141] libmachine: Using SSH client type: native
	I0907 00:38:31.438683   38715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I0907 00:38:31.438706   38715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-395302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-395302/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-395302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:38:31.482384   38715 cache.go:157] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0907 00:38:31.482416   38715 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 774.950898ms
	I0907 00:38:31.482432   38715 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0907 00:38:31.607034   38715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:38:31.607078   38715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:38:31.607108   38715 buildroot.go:174] setting up certificates
	I0907 00:38:31.607187   38715 provision.go:83] configureAuth start
	I0907 00:38:31.607221   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetMachineName
	I0907 00:38:31.607733   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetIP
	I0907 00:38:31.611199   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:31.611643   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:83:03", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-07 01:36:41 +0000 UTC Type:0 Mac:52:54:00:b8:83:03 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:running-upgrade-395302 Clientid:01:52:54:00:b8:83:03}
	I0907 00:38:31.611682   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined IP address 192.168.50.169 and MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:31.612008   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHHostname
	I0907 00:38:31.615417   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:31.615705   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:83:03", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-07 01:36:41 +0000 UTC Type:0 Mac:52:54:00:b8:83:03 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:running-upgrade-395302 Clientid:01:52:54:00:b8:83:03}
	I0907 00:38:31.615873   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined IP address 192.168.50.169 and MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:31.615991   38715 provision.go:138] copyHostCerts
	I0907 00:38:31.616050   38715 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:38:31.616065   38715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:38:31.616131   38715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:38:31.616261   38715 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:38:31.616280   38715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:38:31.616311   38715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:38:31.616384   38715 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:38:31.616398   38715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:38:31.616423   38715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:38:31.616492   38715 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-395302 san=[192.168.50.169 192.168.50.169 localhost 127.0.0.1 minikube running-upgrade-395302]
	I0907 00:38:31.659096   38715 cache.go:157] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0907 00:38:31.659130   38715 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 951.373599ms
	I0907 00:38:31.659146   38715 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0907 00:38:31.817853   38715 provision.go:172] copyRemoteCerts
	I0907 00:38:31.817921   38715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:38:31.817954   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHHostname
	I0907 00:38:31.821753   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:31.821787   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:83:03", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-07 01:36:41 +0000 UTC Type:0 Mac:52:54:00:b8:83:03 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:running-upgrade-395302 Clientid:01:52:54:00:b8:83:03}
	I0907 00:38:31.821813   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined IP address 192.168.50.169 and MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:31.821856   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHPort
	I0907 00:38:31.822095   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHKeyPath
	I0907 00:38:31.822233   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHUsername
	I0907 00:38:31.822326   38715 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/running-upgrade-395302/id_rsa Username:docker}
	I0907 00:38:31.884174   38715 cache.go:157] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0907 00:38:31.884207   38715 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.176706763s
	I0907 00:38:31.884223   38715 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0907 00:38:31.924434   38715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:38:31.937444   38715 cache.go:157] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0907 00:38:31.937467   38715 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.230051881s
	I0907 00:38:31.937480   38715 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0907 00:38:31.946930   38715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0907 00:38:31.981065   38715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:38:32.000491   38715 provision.go:86] duration metric: configureAuth took 393.274325ms
	I0907 00:38:32.000559   38715 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:38:32.000787   38715 config.go:182] Loaded profile config "running-upgrade-395302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0907 00:38:32.000888   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHHostname
	I0907 00:38:32.004410   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:32.006895   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHPort
	I0907 00:38:32.006905   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:83:03", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-07 01:36:41 +0000 UTC Type:0 Mac:52:54:00:b8:83:03 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:running-upgrade-395302 Clientid:01:52:54:00:b8:83:03}
	I0907 00:38:32.006938   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined IP address 192.168.50.169 and MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:32.007097   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHKeyPath
	I0907 00:38:32.007269   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHKeyPath
	I0907 00:38:32.007436   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHUsername
	I0907 00:38:32.007607   38715 main.go:141] libmachine: Using SSH client type: native
	I0907 00:38:32.008217   38715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I0907 00:38:32.008239   38715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:38:32.337599   38715 cache.go:157] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0907 00:38:32.337618   38715 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.630248666s
	I0907 00:38:32.337628   38715 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0907 00:38:32.477008   38715 cache.go:157] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0907 00:38:32.477042   38715 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.769629454s
	I0907 00:38:32.477065   38715 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0907 00:38:32.477088   38715 cache.go:87] Successfully saved all images to host disk.
	I0907 00:38:32.630201   38715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:38:32.630231   38715 machine.go:91] provisioned docker machine in 1.437881384s
	I0907 00:38:32.630243   38715 start.go:300] post-start starting for "running-upgrade-395302" (driver="kvm2")
	I0907 00:38:32.630255   38715 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:38:32.630292   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .DriverName
	I0907 00:38:32.630612   38715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:38:32.630640   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHHostname
	I0907 00:38:32.633753   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:32.634126   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:83:03", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-07 01:36:41 +0000 UTC Type:0 Mac:52:54:00:b8:83:03 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:running-upgrade-395302 Clientid:01:52:54:00:b8:83:03}
	I0907 00:38:32.634158   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined IP address 192.168.50.169 and MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:32.634329   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHPort
	I0907 00:38:32.634519   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHKeyPath
	I0907 00:38:32.634687   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHUsername
	I0907 00:38:32.634857   38715 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/running-upgrade-395302/id_rsa Username:docker}
	I0907 00:38:32.732682   38715 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:38:32.737552   38715 info.go:137] Remote host: Buildroot 2019.02.7
	I0907 00:38:32.737579   38715 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:38:32.737659   38715 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:38:32.737754   38715 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:38:32.737878   38715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:38:32.744344   38715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:38:32.763578   38715 start.go:303] post-start completed in 133.321096ms
	I0907 00:38:32.763598   38715 fix.go:56] fixHost completed within 2.055950897s
	I0907 00:38:32.763617   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHHostname
	I0907 00:38:32.766755   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:32.767183   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:83:03", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-07 01:36:41 +0000 UTC Type:0 Mac:52:54:00:b8:83:03 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:running-upgrade-395302 Clientid:01:52:54:00:b8:83:03}
	I0907 00:38:32.767220   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined IP address 192.168.50.169 and MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:32.767409   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHPort
	I0907 00:38:32.767603   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHKeyPath
	I0907 00:38:32.767768   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHKeyPath
	I0907 00:38:32.767933   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHUsername
	I0907 00:38:32.768092   38715 main.go:141] libmachine: Using SSH client type: native
	I0907 00:38:32.768623   38715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I0907 00:38:32.768638   38715 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0907 00:38:32.903817   38715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047112.900526173
	
	I0907 00:38:32.903840   38715 fix.go:206] guest clock: 1694047112.900526173
	I0907 00:38:32.903863   38715 fix.go:219] Guest: 2023-09-07 00:38:32.900526173 +0000 UTC Remote: 2023-09-07 00:38:32.763601445 +0000 UTC m=+2.679650403 (delta=136.924728ms)
	I0907 00:38:32.903891   38715 fix.go:190] guest clock delta is within tolerance: 136.924728ms
	I0907 00:38:32.903900   38715 start.go:83] releasing machines lock for "running-upgrade-395302", held for 2.196269377s
	I0907 00:38:32.903932   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .DriverName
	I0907 00:38:32.904237   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetIP
	I0907 00:38:32.907429   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:32.907855   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:83:03", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-07 01:36:41 +0000 UTC Type:0 Mac:52:54:00:b8:83:03 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:running-upgrade-395302 Clientid:01:52:54:00:b8:83:03}
	I0907 00:38:32.907896   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined IP address 192.168.50.169 and MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:32.907975   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .DriverName
	I0907 00:38:32.908434   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .DriverName
	I0907 00:38:32.908605   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .DriverName
	I0907 00:38:32.908688   38715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:38:32.908741   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHHostname
	I0907 00:38:32.908850   38715 ssh_runner.go:195] Run: cat /version.json
	I0907 00:38:32.908874   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHHostname
	I0907 00:38:32.911911   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:32.912147   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:32.912318   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:83:03", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-07 01:36:41 +0000 UTC Type:0 Mac:52:54:00:b8:83:03 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:running-upgrade-395302 Clientid:01:52:54:00:b8:83:03}
	I0907 00:38:32.912370   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined IP address 192.168.50.169 and MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:32.912477   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:83:03", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-07 01:36:41 +0000 UTC Type:0 Mac:52:54:00:b8:83:03 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:running-upgrade-395302 Clientid:01:52:54:00:b8:83:03}
	I0907 00:38:32.912502   38715 main.go:141] libmachine: (running-upgrade-395302) DBG | domain running-upgrade-395302 has defined IP address 192.168.50.169 and MAC address 52:54:00:b8:83:03 in network minikube-net
	I0907 00:38:32.912542   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHPort
	I0907 00:38:32.912710   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHKeyPath
	I0907 00:38:32.912843   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHPort
	I0907 00:38:32.912930   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHUsername
	I0907 00:38:32.913023   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHKeyPath
	I0907 00:38:32.913098   38715 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/running-upgrade-395302/id_rsa Username:docker}
	I0907 00:38:32.913819   38715 main.go:141] libmachine: (running-upgrade-395302) Calling .GetSSHUsername
	I0907 00:38:32.913950   38715 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/running-upgrade-395302/id_rsa Username:docker}
	W0907 00:38:33.016908   38715 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0907 00:38:33.016976   38715 ssh_runner.go:195] Run: systemctl --version
	I0907 00:38:33.049080   38715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:38:33.171664   38715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:38:33.181638   38715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:38:33.181710   38715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:38:33.190572   38715 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0907 00:38:33.190650   38715 start.go:466] detecting cgroup driver to use...
	I0907 00:38:33.190722   38715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:38:33.234204   38715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:38:33.248152   38715 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:38:33.248215   38715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:38:33.260674   38715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:38:33.272602   38715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0907 00:38:33.287770   38715 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0907 00:38:33.287861   38715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:38:33.412468   38715 docker.go:212] disabling docker service ...
	I0907 00:38:33.412552   38715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:38:34.435715   38715 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.023133532s)
	I0907 00:38:34.435791   38715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:38:34.452807   38715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:38:34.562458   38715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:38:34.675979   38715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:38:34.694868   38715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:38:34.713316   38715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0907 00:38:34.713394   38715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:38:34.734136   38715 out.go:177] 
	W0907 00:38:34.735693   38715 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0907 00:38:34.735737   38715 out.go:239] * 
	* 
	W0907 00:38:34.737004   38715 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0907 00:38:34.740135   38715 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-395302 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-09-07 00:38:34.768935528 +0000 UTC m=+3653.490391069
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-395302 -n running-upgrade-395302
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-395302 -n running-upgrade-395302: exit status 4 (307.559417ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:38:35.043991   38986 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-395302" does not appear in /home/jenkins/minikube-integration/17174-6470/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-395302" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-395302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-395302
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-395302: (1.393904829s)
--- FAIL: TestRunningBinaryUpgrade (152.78s)
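
The RUNTIME_ENABLE exit above traces back to the pause_image rewrite: the sed targets /etc/crio/crio.conf.d/02-crio.conf, which does not exist on the Buildroot 2019.02.7 guest that the old v1.6.2 binary provisioned ("sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory"). A minimal guard sketch that could be run over the same SSH path, assuming the legacy guest keeps its CRI-O config at the stock /etc/crio/crio.conf location (that path is an assumption, not confirmed by this log):

	# Hypothetical fallback: prefer the drop-in if present, otherwise edit the main config.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	# Same substitution the test attempted, applied to whichever file exists.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"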

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (87.83s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-294956 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-294956 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m23.370938547s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-294956] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-294956 in cluster pause-294956
	* Updating the running kvm2 "pause-294956" VM ...
	* Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-294956" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:38:32.234984   38859 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:38:32.235216   38859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:38:32.235249   38859 out.go:309] Setting ErrFile to fd 2...
	I0907 00:38:32.235268   38859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:38:32.235477   38859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:38:32.236034   38859 out.go:303] Setting JSON to false
	I0907 00:38:32.237128   38859 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4857,"bootTime":1694042256,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:38:32.237217   38859 start.go:138] virtualization: kvm guest
	I0907 00:38:32.239394   38859 out.go:177] * [pause-294956] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:38:32.241713   38859 notify.go:220] Checking for updates...
	I0907 00:38:32.241716   38859 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:38:32.243820   38859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:38:32.245371   38859 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:38:32.247032   38859 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:38:32.248651   38859 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:38:32.251316   38859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:38:32.253879   38859 config.go:182] Loaded profile config "pause-294956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:38:32.254665   38859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:38:32.254756   38859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:38:32.271228   38859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43593
	I0907 00:38:32.271693   38859 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:38:32.272321   38859 main.go:141] libmachine: Using API Version  1
	I0907 00:38:32.272347   38859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:38:32.272790   38859 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:38:32.272972   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:38:32.273235   38859 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:38:32.273604   38859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:38:32.273645   38859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:38:32.288932   38859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0907 00:38:32.289405   38859 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:38:32.290033   38859 main.go:141] libmachine: Using API Version  1
	I0907 00:38:32.290057   38859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:38:32.290527   38859 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:38:32.290730   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:38:32.337537   38859 out.go:177] * Using the kvm2 driver based on existing profile
	I0907 00:38:32.339069   38859 start.go:298] selected driver: kvm2
	I0907 00:38:32.339086   38859 start.go:902] validating driver "kvm2" against &{Name:pause-294956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.1 ClusterName:pause-294956 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.77 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:38:32.339260   38859 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:38:32.339746   38859 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:38:32.339842   38859 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:38:32.355530   38859 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 00:38:32.356340   38859 cni.go:84] Creating CNI manager for ""
	I0907 00:38:32.356362   38859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:38:32.356379   38859 start_flags.go:321] config:
	{Name:pause-294956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-294956 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.77 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alias
es:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:38:32.356644   38859 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:38:32.358598   38859 out.go:177] * Starting control plane node pause-294956 in cluster pause-294956
	I0907 00:38:32.360890   38859 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:38:32.360968   38859 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0907 00:38:32.361026   38859 cache.go:57] Caching tarball of preloaded images
	I0907 00:38:32.361153   38859 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:38:32.361164   38859 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 00:38:32.361478   38859 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/config.json ...
	I0907 00:38:32.361712   38859 start.go:365] acquiring machines lock for pause-294956: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:39:03.340218   38859 start.go:369] acquired machines lock for "pause-294956" in 30.978422666s
	I0907 00:39:03.340267   38859 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:39:03.340275   38859 fix.go:54] fixHost starting: 
	I0907 00:39:03.340740   38859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:39:03.340785   38859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:39:03.359837   38859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0907 00:39:03.360231   38859 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:39:03.360800   38859 main.go:141] libmachine: Using API Version  1
	I0907 00:39:03.360829   38859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:39:03.361160   38859 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:39:03.361337   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:03.361477   38859 main.go:141] libmachine: (pause-294956) Calling .GetState
	I0907 00:39:03.363029   38859 fix.go:102] recreateIfNeeded on pause-294956: state=Running err=<nil>
	W0907 00:39:03.363060   38859 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:39:03.365469   38859 out.go:177] * Updating the running kvm2 "pause-294956" VM ...
	I0907 00:39:03.366942   38859 machine.go:88] provisioning docker machine ...
	I0907 00:39:03.366974   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:03.367178   38859 main.go:141] libmachine: (pause-294956) Calling .GetMachineName
	I0907 00:39:03.367334   38859 buildroot.go:166] provisioning hostname "pause-294956"
	I0907 00:39:03.367352   38859 main.go:141] libmachine: (pause-294956) Calling .GetMachineName
	I0907 00:39:03.367479   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:03.370309   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:03.370848   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:03.370874   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:03.370968   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:03.371145   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:03.371303   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:03.371493   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:03.371658   38859 main.go:141] libmachine: Using SSH client type: native
	I0907 00:39:03.372077   38859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.77 22 <nil> <nil>}
	I0907 00:39:03.372092   38859 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-294956 && echo "pause-294956" | sudo tee /etc/hostname
	I0907 00:39:03.521034   38859 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-294956
	
	I0907 00:39:03.521082   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:03.524495   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:03.524958   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:03.524992   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:03.525254   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:03.525469   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:03.525667   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:03.525811   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:03.526023   38859 main.go:141] libmachine: Using SSH client type: native
	I0907 00:39:03.526500   38859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.77 22 <nil> <nil>}
	I0907 00:39:03.526520   38859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-294956' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-294956/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-294956' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:39:03.659989   38859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:39:03.660010   38859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:39:03.660039   38859 buildroot.go:174] setting up certificates
	I0907 00:39:03.660049   38859 provision.go:83] configureAuth start
	I0907 00:39:03.660066   38859 main.go:141] libmachine: (pause-294956) Calling .GetMachineName
	I0907 00:39:03.660313   38859 main.go:141] libmachine: (pause-294956) Calling .GetIP
	I0907 00:39:03.663066   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:03.663520   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:03.663559   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:03.663711   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:03.666025   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:03.666367   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:03.666401   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:03.666525   38859 provision.go:138] copyHostCerts
	I0907 00:39:03.666578   38859 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:39:03.666587   38859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:39:03.666639   38859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:39:03.666715   38859 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:39:03.666722   38859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:39:03.666740   38859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:39:03.666818   38859 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:39:03.666826   38859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:39:03.666845   38859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:39:03.666888   38859 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.pause-294956 san=[192.168.83.77 192.168.83.77 localhost 127.0.0.1 minikube pause-294956]
	I0907 00:39:03.899516   38859 provision.go:172] copyRemoteCerts
	I0907 00:39:03.899570   38859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:39:03.899592   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:03.902552   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:03.902898   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:03.902948   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:03.903043   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:03.903249   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:03.903432   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:03.903604   38859 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/pause-294956/id_rsa Username:docker}
	I0907 00:39:04.006530   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0907 00:39:04.033122   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0907 00:39:04.061170   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:39:04.088764   38859 provision.go:86] duration metric: configureAuth took 428.697121ms
	I0907 00:39:04.088792   38859 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:39:04.089086   38859 config.go:182] Loaded profile config "pause-294956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:39:04.089175   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:04.091847   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:04.092183   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:04.092216   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:04.092323   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:04.092565   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:04.092748   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:04.092889   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:04.093055   38859 main.go:141] libmachine: Using SSH client type: native
	I0907 00:39:04.093464   38859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.77 22 <nil> <nil>}
	I0907 00:39:04.093492   38859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:39:09.715974   38859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:39:09.716000   38859 machine.go:91] provisioned docker machine in 6.34903422s
	I0907 00:39:09.716012   38859 start.go:300] post-start starting for "pause-294956" (driver="kvm2")
	I0907 00:39:09.716024   38859 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:39:09.716084   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.716572   38859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:39:09.716599   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:09.720063   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.720538   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.720570   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.720758   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:09.720970   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.721191   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:09.721361   38859 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/pause-294956/id_rsa Username:docker}
	I0907 00:39:09.813757   38859 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:39:09.818272   38859 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:39:09.818299   38859 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:39:09.818364   38859 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:39:09.818464   38859 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:39:09.818582   38859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:39:09.829380   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:39:09.852567   38859 start.go:303] post-start completed in 136.539395ms
	I0907 00:39:09.852591   38859 fix.go:56] fixHost completed within 6.512317401s
	I0907 00:39:09.852610   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:09.855487   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.855838   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.855871   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.856002   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:09.856223   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.856403   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.856553   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:09.856739   38859 main.go:141] libmachine: Using SSH client type: native
	I0907 00:39:09.857113   38859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.77 22 <nil> <nil>}
	I0907 00:39:09.857125   38859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0907 00:39:09.983615   38859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047149.980569124
	
	I0907 00:39:09.983635   38859 fix.go:206] guest clock: 1694047149.980569124
	I0907 00:39:09.983642   38859 fix.go:219] Guest: 2023-09-07 00:39:09.980569124 +0000 UTC Remote: 2023-09-07 00:39:09.852594691 +0000 UTC m=+37.654584514 (delta=127.974433ms)
	I0907 00:39:09.983659   38859 fix.go:190] guest clock delta is within tolerance: 127.974433ms
	I0907 00:39:09.983674   38859 start.go:83] releasing machines lock for "pause-294956", held for 6.643416087s
	I0907 00:39:09.983697   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.983968   38859 main.go:141] libmachine: (pause-294956) Calling .GetIP
	I0907 00:39:09.986882   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.987290   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.987330   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.987474   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.988078   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.988257   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.988324   38859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:39:09.988361   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:09.988485   38859 ssh_runner.go:195] Run: cat /version.json
	I0907 00:39:09.988514   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:09.991158   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.991342   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.991556   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.991587   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.991694   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.991718   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.991835   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:09.991921   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:09.992016   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.992125   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.992184   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:09.992270   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:09.992332   38859 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/pause-294956/id_rsa Username:docker}
	I0907 00:39:09.992422   38859 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/pause-294956/id_rsa Username:docker}
	I0907 00:39:10.103715   38859 ssh_runner.go:195] Run: systemctl --version
	I0907 00:39:10.110193   38859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:39:10.265449   38859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:39:10.271232   38859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:39:10.271336   38859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:39:10.280748   38859 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0907 00:39:10.280772   38859 start.go:466] detecting cgroup driver to use...
	I0907 00:39:10.280823   38859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:39:10.296111   38859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:39:10.310045   38859 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:39:10.310103   38859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:39:10.325785   38859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:39:10.339992   38859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:39:10.482210   38859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:39:10.875961   38859 docker.go:212] disabling docker service ...
	I0907 00:39:10.876050   38859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:39:10.985146   38859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:39:11.020435   38859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:39:11.325443   38859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:39:11.620462   38859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:39:11.651188   38859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:39:11.690676   38859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:39:11.690749   38859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:39:11.713331   38859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:39:11.713409   38859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:39:11.735888   38859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:39:11.750169   38859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:39:11.768373   38859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:39:11.784803   38859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:39:11.801886   38859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:39:11.816664   38859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:39:12.058238   38859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:39:13.320854   38859 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.262576824s)
	I0907 00:39:13.320880   38859 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:39:13.320942   38859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:39:13.332937   38859 start.go:534] Will wait 60s for crictl version
	I0907 00:39:13.333018   38859 ssh_runner.go:195] Run: which crictl
	I0907 00:39:13.339493   38859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:39:13.406105   38859 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:39:13.406190   38859 ssh_runner.go:195] Run: crio --version
	I0907 00:39:13.479873   38859 ssh_runner.go:195] Run: crio --version
	I0907 00:39:13.539941   38859 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:39:13.541572   38859 main.go:141] libmachine: (pause-294956) Calling .GetIP
	I0907 00:39:13.544263   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:13.544700   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:13.544730   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:13.544960   38859 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0907 00:39:13.549645   38859 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:39:13.549696   38859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:39:13.581266   38859 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:39:13.581287   38859 crio.go:415] Images already preloaded, skipping extraction
	I0907 00:39:13.581345   38859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:39:13.794694   38859 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:39:13.794714   38859 cache_images.go:84] Images are preloaded, skipping loading
	I0907 00:39:13.794905   38859 ssh_runner.go:195] Run: crio config
	I0907 00:39:14.064040   38859 cni.go:84] Creating CNI manager for ""
	I0907 00:39:14.064067   38859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:39:14.064094   38859 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:39:14.064120   38859 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.77 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-294956 NodeName:pause-294956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:39:14.064357   38859 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-294956"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
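	
	That is the complete generated kubeadm configuration: an InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration component configs, all reflecting the options struct logged above. It is written to the node a few lines further down as /var/tmp/minikube/kubeadm.yaml.new. A file like this can also be sanity-checked by hand with the staged kubeadm binary, assuming the v1.28 validate subcommand is available on it; the test itself does not run this:

    sudo /var/lib/minikube/binaries/v1.28.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new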
	
	I0907 00:39:14.064466   38859 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-294956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-294956 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:39:14.064551   38859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:39:14.109096   38859 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:39:14.109176   38859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:39:14.125881   38859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0907 00:39:14.148322   38859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:39:14.187804   38859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
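
The three scp lines above stage the kubelet systemd drop-in (the [Unit]/[Service]/[Install] text shown earlier), the kubelet.service unit itself, and the rendered kubeadm config on the guest. Once systemd has reloaded its units, the effective kubelet unit can be inspected with systemd's normal tooling; an illustrative check, not something the test runs, is:

    systemctl cat kubelet                              # kubelet.service plus 10-kubeadm.conf with the ExecStart override
    systemctl show kubelet -p ExecStart --no-pager
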
	I0907 00:39:14.217920   38859 ssh_runner.go:195] Run: grep 192.168.83.77	control-plane.minikube.internal$ /etc/hosts
	I0907 00:39:14.225599   38859 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956 for IP: 192.168.83.77
	I0907 00:39:14.225629   38859 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:39:14.225777   38859 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:39:14.225828   38859 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:39:14.225924   38859 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/client.key
	I0907 00:39:14.226003   38859 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/apiserver.key.4ae8af40
	I0907 00:39:14.226057   38859 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/proxy-client.key
	I0907 00:39:14.226195   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:39:14.226235   38859 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:39:14.226249   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:39:14.226285   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:39:14.226318   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:39:14.226345   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:39:14.226403   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:39:14.227158   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:39:14.269211   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:39:14.311009   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:39:14.365060   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:39:14.414904   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:39:14.457502   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:39:14.494350   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:39:14.530318   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:39:14.572224   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:39:14.617742   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:39:14.656379   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:39:14.711525   38859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:39:14.751567   38859 ssh_runner.go:195] Run: openssl version
	I0907 00:39:14.759855   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:39:14.783810   38859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:39:14.796756   38859 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:39:14.796830   38859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:39:14.809870   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:39:14.827316   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:39:14.855011   38859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:39:14.871063   38859 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:39:14.871137   38859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:39:14.906153   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:39:14.920170   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:39:14.939114   38859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:39:14.948233   38859 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:39:14.948303   38859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:39:14.959504   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:39:14.974396   38859 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:39:14.984652   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:39:14.995492   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:39:15.002267   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:39:15.009529   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:39:15.017512   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:39:15.024828   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
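
The openssl runs above finish the certificate setup in two passes: each certificate placed under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject-hash name (the b5213941.0, 51391683.0 and 3ec20f2e.0 links), and every already-present apiserver, etcd and front-proxy cert is checked for expiry within the next 24 hours. The per-file pattern, reproduced by hand, is roughly:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    # -checkend 86400 exits non-zero if the certificate expires within 24 hours
    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
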
	I0907 00:39:15.031849   38859 kubeadm.go:404] StartCluster: {Name:pause-294956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-294956 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.77 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:39:15.031990   38859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:39:15.032058   38859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:39:15.117361   38859 cri.go:89] found id: "bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac"
	I0907 00:39:15.117385   38859 cri.go:89] found id: "2b7b254d94b7014206733774169b22d53d3312359a323bb60853ae04b2a2fc31"
	I0907 00:39:15.117393   38859 cri.go:89] found id: "628ac1485aff494846808e0a39f3a015cac8ed064f64dbace59a80782f95cee2"
	I0907 00:39:15.117399   38859 cri.go:89] found id: "a27f0726cefa5115c02c120f29b4f47821916ac6caab862e03e9f0cb15234333"
	I0907 00:39:15.117441   38859 cri.go:89] found id: "3fb1eeb160abea30714fbbf94f48e6c659b9d39b41ee06690b8a3efe1e63f356"
	I0907 00:39:15.117450   38859 cri.go:89] found id: "4dbcb81e9e550322a617dccff3ec9cec6f06322798208999657ecbaa5198d21c"
	I0907 00:39:15.117469   38859 cri.go:89] found id: ""
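
StartCluster begins by enumerating any kube-system containers left over from the first start; six container IDs are found here because the cluster was already running before this second start. The listing is exactly the crictl invocation shown above and can be repeated on the guest with:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
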
	I0907 00:39:15.117519   38859 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-294956 -n pause-294956
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-294956 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-294956 logs -n 25: (1.444431868s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:34 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:34 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:34 UTC | 07 Sep 23 00:34 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:35 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:35 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:35 UTC | 07 Sep 23 00:35 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:36 UTC | 07 Sep 23 00:36 UTC |
	| start   | -p offline-crio-315234         | offline-crio-315234       | jenkins | v1.31.2 | 07 Sep 23 00:36 UTC | 07 Sep 23 00:37 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:36 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p force-systemd-env-347596    | force-systemd-env-347596  | jenkins | v1.31.2 | 07 Sep 23 00:36 UTC | 07 Sep 23 00:37 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:36 UTC | 07 Sep 23 00:37 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-347596    | force-systemd-env-347596  | jenkins | v1.31.2 | 07 Sep 23 00:37 UTC | 07 Sep 23 00:37 UTC |
	| start   | -p pause-294956 --memory=2048  | pause-294956              | jenkins | v1.31.2 | 07 Sep 23 00:37 UTC | 07 Sep 23 00:38 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-315234         | offline-crio-315234       | jenkins | v1.31.2 | 07 Sep 23 00:37 UTC | 07 Sep 23 00:37 UTC |
	| start   | -p cert-expiration-386196      | cert-expiration-386196    | jenkins | v1.31.2 | 07 Sep 23 00:37 UTC | 07 Sep 23 00:38 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m           |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:37 UTC | 07 Sep 23 00:38 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-395302      | running-upgrade-395302    | jenkins | v1.31.2 | 07 Sep 23 00:38 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:38 UTC | 07 Sep 23 00:38 UTC |
	| start   | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:38 UTC | 07 Sep 23 00:39 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-294956                | pause-294956              | jenkins | v1.31.2 | 07 Sep 23 00:38 UTC | 07 Sep 23 00:39 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-395302      | running-upgrade-395302    | jenkins | v1.31.2 | 07 Sep 23 00:38 UTC | 07 Sep 23 00:38 UTC |
	| start   | -p force-systemd-flag-949073   | force-systemd-flag-949073 | jenkins | v1.31.2 | 07 Sep 23 00:38 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-340842 sudo    | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:39 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:39 UTC | 07 Sep 23 00:39 UTC |
	| start   | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:39 UTC |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/07 00:39:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:39:08.644002   39423 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:39:08.644127   39423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:39:08.644130   39423 out.go:309] Setting ErrFile to fd 2...
	I0907 00:39:08.644133   39423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:39:08.644348   39423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:39:08.644875   39423 out.go:303] Setting JSON to false
	I0907 00:39:08.645755   39423 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4893,"bootTime":1694042256,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:39:08.645802   39423 start.go:138] virtualization: kvm guest
	I0907 00:39:08.648195   39423 out.go:177] * [NoKubernetes-340842] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:39:08.650246   39423 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:39:08.650299   39423 notify.go:220] Checking for updates...
	I0907 00:39:08.651744   39423 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:39:08.653294   39423 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:39:08.654907   39423 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:39:08.656651   39423 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:39:08.658228   39423 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:39:08.660133   39423 config.go:182] Loaded profile config "NoKubernetes-340842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0907 00:39:08.660657   39423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:39:08.660705   39423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:39:08.675437   39423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I0907 00:39:08.675807   39423 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:39:08.676333   39423 main.go:141] libmachine: Using API Version  1
	I0907 00:39:08.676349   39423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:39:08.676713   39423 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:39:08.676887   39423 main.go:141] libmachine: (NoKubernetes-340842) Calling .DriverName
	I0907 00:39:08.677085   39423 start.go:1720] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0907 00:39:08.677100   39423 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:39:08.677375   39423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:39:08.677408   39423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:39:08.691750   39423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I0907 00:39:08.692124   39423 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:39:08.692607   39423 main.go:141] libmachine: Using API Version  1
	I0907 00:39:08.692627   39423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:39:08.692939   39423 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:39:08.693098   39423 main.go:141] libmachine: (NoKubernetes-340842) Calling .DriverName
	I0907 00:39:08.728325   39423 out.go:177] * Using the kvm2 driver based on existing profile
	I0907 00:39:08.729741   39423 start.go:298] selected driver: kvm2
	I0907 00:39:08.729750   39423 start.go:902] validating driver "kvm2" against &{Name:NoKubernetes-340842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-340842 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:39:08.729854   39423 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:39:08.730137   39423 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:39:08.730186   39423 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:39:08.743898   39423 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 00:39:08.744836   39423 cni.go:84] Creating CNI manager for ""
	I0907 00:39:08.744850   39423 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:39:08.744859   39423 start_flags.go:321] config:
	{Name:NoKubernetes-340842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-340842 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:39:08.745068   39423 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:39:08.748058   39423 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-340842
	I0907 00:39:09.983757   39088 start.go:369] acquired machines lock for "force-systemd-flag-949073" in 33.384274217s
	I0907 00:39:09.983804   39088 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-949073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-949073 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:39:09.983893   39088 start.go:125] createHost starting for "" (driver="kvm2")
	I0907 00:39:09.985639   39088 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0907 00:39:09.985817   39088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:39:09.985871   39088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:39:10.005551   39088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0907 00:39:10.006103   39088 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:39:10.006753   39088 main.go:141] libmachine: Using API Version  1
	I0907 00:39:10.006812   39088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:39:10.007172   39088 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:39:10.007368   39088 main.go:141] libmachine: (force-systemd-flag-949073) Calling .GetMachineName
	I0907 00:39:10.007524   39088 main.go:141] libmachine: (force-systemd-flag-949073) Calling .DriverName
	I0907 00:39:10.007703   39088 start.go:159] libmachine.API.Create for "force-systemd-flag-949073" (driver="kvm2")
	I0907 00:39:10.007732   39088 client.go:168] LocalClient.Create starting
	I0907 00:39:10.007768   39088 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem
	I0907 00:39:10.007806   39088 main.go:141] libmachine: Decoding PEM data...
	I0907 00:39:10.007828   39088 main.go:141] libmachine: Parsing certificate...
	I0907 00:39:10.007904   39088 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem
	I0907 00:39:10.007929   39088 main.go:141] libmachine: Decoding PEM data...
	I0907 00:39:10.007949   39088 main.go:141] libmachine: Parsing certificate...
	I0907 00:39:10.007974   39088 main.go:141] libmachine: Running pre-create checks...
	I0907 00:39:10.007990   39088 main.go:141] libmachine: (force-systemd-flag-949073) Calling .PreCreateCheck
	I0907 00:39:10.008296   39088 main.go:141] libmachine: (force-systemd-flag-949073) Calling .GetConfigRaw
	I0907 00:39:10.008785   39088 main.go:141] libmachine: Creating machine...
	I0907 00:39:10.008807   39088 main.go:141] libmachine: (force-systemd-flag-949073) Calling .Create
	I0907 00:39:10.008953   39088 main.go:141] libmachine: (force-systemd-flag-949073) Creating KVM machine...
	I0907 00:39:10.010120   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | found existing default KVM network
	I0907 00:39:10.011588   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:10.011400   39469 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cb:69:26} reservation:<nil>}
	I0907 00:39:10.012572   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:10.012498   39469 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002dc090}
	I0907 00:39:10.018586   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | trying to create private KVM network mk-force-systemd-flag-949073 192.168.50.0/24...
	I0907 00:39:10.101119   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | private KVM network mk-force-systemd-flag-949073 192.168.50.0/24 created
	I0907 00:39:10.101160   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:10.101045   39469 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:39:10.101177   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting up store path in /home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073 ...
	I0907 00:39:10.101207   39088 main.go:141] libmachine: (force-systemd-flag-949073) Building disk image from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0907 00:39:10.101237   39088 main.go:141] libmachine: (force-systemd-flag-949073) Downloading /home/jenkins/minikube-integration/17174-6470/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0907 00:39:10.308846   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:10.308655   39469 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073/id_rsa...
	I0907 00:39:10.358864   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:10.358684   39469 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073/force-systemd-flag-949073.rawdisk...
	I0907 00:39:10.358905   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Writing magic tar header
	I0907 00:39:10.358920   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Writing SSH key tar header
	I0907 00:39:10.358933   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:10.358835   39469 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073 ...
	I0907 00:39:10.358951   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073
	I0907 00:39:10.359046   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines
	I0907 00:39:10.359076   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073 (perms=drwx------)
	I0907 00:39:10.359088   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:39:10.359103   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470
	I0907 00:39:10.359121   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines (perms=drwxr-xr-x)
	I0907 00:39:10.359135   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0907 00:39:10.359148   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home/jenkins
	I0907 00:39:10.359166   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube (perms=drwxr-xr-x)
	I0907 00:39:10.359181   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home
	I0907 00:39:10.359195   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Skipping /home - not owner
	I0907 00:39:10.359210   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470 (perms=drwxrwxr-x)
	I0907 00:39:10.359220   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0907 00:39:10.359236   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0907 00:39:10.359245   39088 main.go:141] libmachine: (force-systemd-flag-949073) Creating domain...
	I0907 00:39:10.360516   39088 main.go:141] libmachine: (force-systemd-flag-949073) define libvirt domain using xml: 
	I0907 00:39:10.360563   39088 main.go:141] libmachine: (force-systemd-flag-949073) <domain type='kvm'>
	I0907 00:39:10.360581   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <name>force-systemd-flag-949073</name>
	I0907 00:39:10.360602   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <memory unit='MiB'>2048</memory>
	I0907 00:39:10.360615   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <vcpu>2</vcpu>
	I0907 00:39:10.360627   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <features>
	I0907 00:39:10.360639   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <acpi/>
	I0907 00:39:10.360650   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <apic/>
	I0907 00:39:10.360663   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <pae/>
	I0907 00:39:10.360679   39088 main.go:141] libmachine: (force-systemd-flag-949073)     
	I0907 00:39:10.360692   39088 main.go:141] libmachine: (force-systemd-flag-949073)   </features>
	I0907 00:39:10.360705   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <cpu mode='host-passthrough'>
	I0907 00:39:10.360720   39088 main.go:141] libmachine: (force-systemd-flag-949073)   
	I0907 00:39:10.360733   39088 main.go:141] libmachine: (force-systemd-flag-949073)   </cpu>
	I0907 00:39:10.360750   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <os>
	I0907 00:39:10.360763   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <type>hvm</type>
	I0907 00:39:10.360774   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <boot dev='cdrom'/>
	I0907 00:39:10.360786   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <boot dev='hd'/>
	I0907 00:39:10.360805   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <bootmenu enable='no'/>
	I0907 00:39:10.360817   39088 main.go:141] libmachine: (force-systemd-flag-949073)   </os>
	I0907 00:39:10.360827   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <devices>
	I0907 00:39:10.360845   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <disk type='file' device='cdrom'>
	I0907 00:39:10.360861   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073/boot2docker.iso'/>
	I0907 00:39:10.360886   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <target dev='hdc' bus='scsi'/>
	I0907 00:39:10.360898   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <readonly/>
	I0907 00:39:10.360910   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </disk>
	I0907 00:39:10.360920   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <disk type='file' device='disk'>
	I0907 00:39:10.360941   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0907 00:39:10.360989   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073/force-systemd-flag-949073.rawdisk'/>
	I0907 00:39:10.361020   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <target dev='hda' bus='virtio'/>
	I0907 00:39:10.361037   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </disk>
	I0907 00:39:10.361050   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <interface type='network'>
	I0907 00:39:10.361065   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <source network='mk-force-systemd-flag-949073'/>
	I0907 00:39:10.361077   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <model type='virtio'/>
	I0907 00:39:10.361090   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </interface>
	I0907 00:39:10.361111   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <interface type='network'>
	I0907 00:39:10.361136   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <source network='default'/>
	I0907 00:39:10.361154   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <model type='virtio'/>
	I0907 00:39:10.361169   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </interface>
	I0907 00:39:10.361192   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <serial type='pty'>
	I0907 00:39:10.361204   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <target port='0'/>
	I0907 00:39:10.361217   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </serial>
	I0907 00:39:10.361229   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <console type='pty'>
	I0907 00:39:10.361250   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <target type='serial' port='0'/>
	I0907 00:39:10.361264   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </console>
	I0907 00:39:10.361279   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <rng model='virtio'>
	I0907 00:39:10.361292   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <backend model='random'>/dev/random</backend>
	I0907 00:39:10.361305   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </rng>
	I0907 00:39:10.361320   39088 main.go:141] libmachine: (force-systemd-flag-949073)     
	I0907 00:39:10.361334   39088 main.go:141] libmachine: (force-systemd-flag-949073)     
	I0907 00:39:10.361346   39088 main.go:141] libmachine: (force-systemd-flag-949073)   </devices>
	I0907 00:39:10.361359   39088 main.go:141] libmachine: (force-systemd-flag-949073) </domain>
	I0907 00:39:10.361370   39088 main.go:141] libmachine: (force-systemd-flag-949073) 
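
The XML above is the complete libvirt domain that the kvm2 driver defines for the new VM: the boot2docker ISO attached as a cdrom, the raw disk image as a virtio disk, one NIC on the per-profile network mk-force-systemd-flag-949073 and one on libvirt's default network, a serial console, and a virtio RNG. The driver submits this through the libvirt API; doing the same by hand from a saved copy of the XML (here assumed to be written to domain.xml) would be approximately:

    virsh --connect qemu:///system net-list --all     # both networks need to be active first
    virsh --connect qemu:///system define domain.xml
    virsh --connect qemu:///system start force-systemd-flag-949073
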
	I0907 00:39:10.365596   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:ef:95:d1 in network default
	I0907 00:39:10.366187   39088 main.go:141] libmachine: (force-systemd-flag-949073) Ensuring networks are active...
	I0907 00:39:10.366218   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:10.367039   39088 main.go:141] libmachine: (force-systemd-flag-949073) Ensuring network default is active
	I0907 00:39:10.367551   39088 main.go:141] libmachine: (force-systemd-flag-949073) Ensuring network mk-force-systemd-flag-949073 is active
	I0907 00:39:10.368250   39088 main.go:141] libmachine: (force-systemd-flag-949073) Getting domain xml...
	I0907 00:39:10.369072   39088 main.go:141] libmachine: (force-systemd-flag-949073) Creating domain...
	I0907 00:39:09.715974   38859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:39:09.716000   38859 machine.go:91] provisioned docker machine in 6.34903422s
	I0907 00:39:09.716012   38859 start.go:300] post-start starting for "pause-294956" (driver="kvm2")
	I0907 00:39:09.716024   38859 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:39:09.716084   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.716572   38859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:39:09.716599   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:09.720063   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.720538   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.720570   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.720758   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:09.720970   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.721191   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:09.721361   38859 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/pause-294956/id_rsa Username:docker}
	I0907 00:39:09.813757   38859 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:39:09.818272   38859 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:39:09.818299   38859 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:39:09.818364   38859 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:39:09.818464   38859 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:39:09.818582   38859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:39:09.829380   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:39:09.852567   38859 start.go:303] post-start completed in 136.539395ms
	I0907 00:39:09.852591   38859 fix.go:56] fixHost completed within 6.512317401s
	I0907 00:39:09.852610   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:09.855487   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.855838   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.855871   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.856002   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:09.856223   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.856403   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.856553   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:09.856739   38859 main.go:141] libmachine: Using SSH client type: native
	I0907 00:39:09.857113   38859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.77 22 <nil> <nil>}
	I0907 00:39:09.857125   38859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0907 00:39:09.983615   38859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047149.980569124
	
	I0907 00:39:09.983635   38859 fix.go:206] guest clock: 1694047149.980569124
	I0907 00:39:09.983642   38859 fix.go:219] Guest: 2023-09-07 00:39:09.980569124 +0000 UTC Remote: 2023-09-07 00:39:09.852594691 +0000 UTC m=+37.654584514 (delta=127.974433ms)
	I0907 00:39:09.983659   38859 fix.go:190] guest clock delta is within tolerance: 127.974433ms
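	Note: the guest-clock comparison above derives the delta from the output of `date +%s.%N` on the guest. A minimal sketch of such a delta-within-tolerance check in Go; the helper name and the 2s tolerance here are illustrative assumptions, not minikube's actual fix.go logic:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts "seconds.nanoseconds" output into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1694047149.980569124") // value taken from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		tolerance := 2 * time.Second // assumed tolerance, for illustration only
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}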
	I0907 00:39:09.983674   38859 start.go:83] releasing machines lock for "pause-294956", held for 6.643416087s
	I0907 00:39:09.983697   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.983968   38859 main.go:141] libmachine: (pause-294956) Calling .GetIP
	I0907 00:39:09.986882   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.987290   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.987330   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.987474   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.988078   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.988257   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.988324   38859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:39:09.988361   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:09.988485   38859 ssh_runner.go:195] Run: cat /version.json
	I0907 00:39:09.988514   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:09.991158   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.991342   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.991556   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.991587   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.991694   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.991718   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.991835   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:09.991921   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:09.992016   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.992125   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.992184   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:09.992270   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:09.992332   38859 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/pause-294956/id_rsa Username:docker}
	I0907 00:39:09.992422   38859 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/pause-294956/id_rsa Username:docker}
	I0907 00:39:10.103715   38859 ssh_runner.go:195] Run: systemctl --version
	I0907 00:39:10.110193   38859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:39:10.265449   38859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:39:10.271232   38859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:39:10.271336   38859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:39:10.280748   38859 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
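	The find/mv invocation above side-lines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix so the runtime ignores them. A rough Go equivalent of that rename pass, under the same path and glob patterns as the log (the function name is hypothetical):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeConfigs renames bridge/podman CNI configs so CRI-O will not load them.
	func disableBridgeConfigs(dir string) error {
		for _, pattern := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join(dir, pattern))
			if err != nil {
				return err
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return err
				}
				fmt.Println("disabled", m)
			}
		}
		return nil
	}

	func main() {
		if err := disableBridgeConfigs("/etc/cni/net.d"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}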
	I0907 00:39:10.280772   38859 start.go:466] detecting cgroup driver to use...
	I0907 00:39:10.280823   38859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:39:10.296111   38859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:39:10.310045   38859 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:39:10.310103   38859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:39:10.325785   38859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:39:10.339992   38859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:39:10.482210   38859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:39:10.875961   38859 docker.go:212] disabling docker service ...
	I0907 00:39:10.876050   38859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:39:10.985146   38859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:39:11.020435   38859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:39:11.325443   38859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:39:11.620462   38859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:39:11.651188   38859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:39:11.690676   38859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:39:11.690749   38859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:39:11.713331   38859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:39:11.713409   38859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:39:11.735888   38859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:39:11.750169   38859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:39:11.768373   38859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:39:11.784803   38859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:39:11.801886   38859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:39:11.816664   38859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:39:12.058238   38859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:39:13.320854   38859 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.262576824s)
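	The two sed invocations above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A hedged Go sketch of the same in-place substitution (file path and replacement values taken from the log; the regexes approximate the sed expressions):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
	}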
	I0907 00:39:13.320880   38859 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:39:13.320942   38859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:39:13.332937   38859 start.go:534] Will wait 60s for crictl version
	I0907 00:39:13.333018   38859 ssh_runner.go:195] Run: which crictl
	I0907 00:39:13.339493   38859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:39:13.406105   38859 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:39:13.406190   38859 ssh_runner.go:195] Run: crio --version
	I0907 00:39:13.479873   38859 ssh_runner.go:195] Run: crio --version
	I0907 00:39:13.539941   38859 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:39:08.749502   39423 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0907 00:39:08.863706   39423 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0907 00:39:08.863894   39423 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/NoKubernetes-340842/config.json ...
	I0907 00:39:08.864213   39423 start.go:365] acquiring machines lock for NoKubernetes-340842: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:39:11.676466   39088 main.go:141] libmachine: (force-systemd-flag-949073) Waiting to get IP...
	I0907 00:39:11.677383   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:11.677850   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:11.677881   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:11.677823   39469 retry.go:31] will retry after 270.37335ms: waiting for machine to come up
	I0907 00:39:11.950453   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:11.951132   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:11.951160   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:11.951073   39469 retry.go:31] will retry after 374.861139ms: waiting for machine to come up
	I0907 00:39:12.327645   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:12.328103   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:12.328133   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:12.328048   39469 retry.go:31] will retry after 360.414902ms: waiting for machine to come up
	I0907 00:39:12.689723   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:12.690301   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:12.690330   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:12.690247   39469 retry.go:31] will retry after 523.655417ms: waiting for machine to come up
	I0907 00:39:13.215882   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:13.216416   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:13.216457   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:13.216364   39469 retry.go:31] will retry after 688.28809ms: waiting for machine to come up
	I0907 00:39:13.906112   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:13.906723   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:13.906791   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:13.906659   39469 retry.go:31] will retry after 683.446024ms: waiting for machine to come up
	I0907 00:39:14.591621   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:14.592163   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:14.592190   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:14.592114   39469 retry.go:31] will retry after 728.832955ms: waiting for machine to come up
	I0907 00:39:15.322514   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:15.323107   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:15.323133   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:15.323045   39469 retry.go:31] will retry after 1.000252465s: waiting for machine to come up
	I0907 00:39:16.324905   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:16.325332   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:16.325360   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:16.325296   39469 retry.go:31] will retry after 1.455092303s: waiting for machine to come up
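	The retry.go lines above show libmachine polling for the new domain's DHCP lease with a growing delay. A minimal retry-with-backoff loop in that spirit; the getIP stub is a stand-in, not the libmachine call:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getIP is a placeholder for "look up the domain's current IP"; it fails until DHCP has assigned one.
	func getIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		delay := 270 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			ip, err := getIP()
			if err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the wait, roughly matching the increasing delays in the log
		}
		fmt.Println("gave up waiting for machine to come up")
	}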
	I0907 00:39:13.541572   38859 main.go:141] libmachine: (pause-294956) Calling .GetIP
	I0907 00:39:13.544263   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:13.544700   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:13.544730   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:13.544960   38859 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0907 00:39:13.549645   38859 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:39:13.549696   38859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:39:13.581266   38859 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:39:13.581287   38859 crio.go:415] Images already preloaded, skipping extraction
	I0907 00:39:13.581345   38859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:39:13.794694   38859 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:39:13.794714   38859 cache_images.go:84] Images are preloaded, skipping loading
	I0907 00:39:13.794905   38859 ssh_runner.go:195] Run: crio config
	I0907 00:39:14.064040   38859 cni.go:84] Creating CNI manager for ""
	I0907 00:39:14.064067   38859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:39:14.064094   38859 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:39:14.064120   38859 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.77 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-294956 NodeName:pause-294956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:39:14.064357   38859 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-294956"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:39:14.064466   38859 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-294956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-294956 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
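	The kubelet drop-in above is rendered from the node's IP, hostname, and Kubernetes version. A small text/template sketch that produces a drop-in of that shape; the template fields and their names are assumptions for illustration, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(dropIn))
		// Values taken from the log above.
		err := tmpl.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.28.1",
			"NodeName":          "pause-294956",
			"NodeIP":            "192.168.83.77",
		})
		if err != nil {
			panic(err)
		}
	}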
	I0907 00:39:14.064551   38859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:39:14.109096   38859 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:39:14.109176   38859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:39:14.125881   38859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0907 00:39:14.148322   38859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:39:14.187804   38859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0907 00:39:14.217920   38859 ssh_runner.go:195] Run: grep 192.168.83.77	control-plane.minikube.internal$ /etc/hosts
	I0907 00:39:14.225599   38859 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956 for IP: 192.168.83.77
	I0907 00:39:14.225629   38859 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:39:14.225777   38859 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:39:14.225828   38859 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:39:14.225924   38859 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/client.key
	I0907 00:39:14.226003   38859 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/apiserver.key.4ae8af40
	I0907 00:39:14.226057   38859 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/proxy-client.key
	I0907 00:39:14.226195   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:39:14.226235   38859 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:39:14.226249   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:39:14.226285   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:39:14.226318   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:39:14.226345   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:39:14.226403   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:39:14.227158   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:39:14.269211   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:39:14.311009   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:39:14.365060   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:39:14.414904   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:39:14.457502   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:39:14.494350   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:39:14.530318   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:39:14.572224   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:39:14.617742   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:39:14.656379   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:39:14.711525   38859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:39:14.751567   38859 ssh_runner.go:195] Run: openssl version
	I0907 00:39:14.759855   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:39:14.783810   38859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:39:14.796756   38859 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:39:14.796830   38859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:39:14.809870   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:39:14.827316   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:39:14.855011   38859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:39:14.871063   38859 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:39:14.871137   38859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:39:14.906153   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:39:14.920170   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:39:14.939114   38859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:39:14.948233   38859 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:39:14.948303   38859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:39:14.959504   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
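	Each ln -fs step above links a CA file into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0) so the system trust store can find it. A sketch of the same hash-and-symlink step in Go, shelling out to openssl exactly as the log does (paths from the log; error handling abbreviated):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		certPath := "/usr/share/ca-certificates/minikubeCA.pem"
		// Same as: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate ln -fs by replacing any existing link
		if err := os.Symlink(certPath, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", certPath, "->", link)
	}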
	I0907 00:39:14.974396   38859 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:39:14.984652   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:39:14.995492   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:39:15.002267   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:39:15.009529   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:39:15.017512   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:39:15.024828   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
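	The openssl x509 -checkend 86400 calls above verify that each control-plane certificate is still valid for at least another day before reusing it. An equivalent check written with crypto/x509 (certificate path taken from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Same condition as `openssl x509 -checkend 86400`: still valid 24h from now?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h; regeneration needed")
		} else {
			fmt.Println("certificate valid for at least another 24h")
		}
	}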
	I0907 00:39:15.031849   38859 kubeadm.go:404] StartCluster: {Name:pause-294956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.1 ClusterName:pause-294956 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.77 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securit
y-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:39:15.031990   38859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:39:15.032058   38859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:39:15.117361   38859 cri.go:89] found id: "bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac"
	I0907 00:39:15.117385   38859 cri.go:89] found id: "2b7b254d94b7014206733774169b22d53d3312359a323bb60853ae04b2a2fc31"
	I0907 00:39:15.117393   38859 cri.go:89] found id: "628ac1485aff494846808e0a39f3a015cac8ed064f64dbace59a80782f95cee2"
	I0907 00:39:15.117399   38859 cri.go:89] found id: "a27f0726cefa5115c02c120f29b4f47821916ac6caab862e03e9f0cb15234333"
	I0907 00:39:15.117441   38859 cri.go:89] found id: "3fb1eeb160abea30714fbbf94f48e6c659b9d39b41ee06690b8a3efe1e63f356"
	I0907 00:39:15.117450   38859 cri.go:89] found id: "4dbcb81e9e550322a617dccff3ec9cec6f06322798208999657ecbaa5198d21c"
	I0907 00:39:15.117469   38859 cri.go:89] found id: ""
	I0907 00:39:15.117519   38859 ssh_runner.go:195] Run: sudo runc list -f json
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-07 00:37:39 UTC, ends at Thu 2023-09-07 00:39:56 UTC. --
	Sep 07 00:39:55 pause-294956 crio[2454]: time="2023-09-07 00:39:55.731314838Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-j58nv,Uid:f65c3cb4-c02a-42b1-abad-251a71700f77,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694047153826312391,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:38:26.482040380Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-294956,Uid:75b5f5b91fd0ece08743fa5e2dd7e632,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694047153788522899,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 75b5f5b91fd0ece08743fa5e2dd7e632,kubernetes.io/config.seen: 2023-09-07T00:38:13.061248830Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-294956,Uid:192e77cf6689352b167fe51298ee6394,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694047153775027399,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf66893
52b167fe51298ee6394,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.77:8443,kubernetes.io/config.hash: 192e77cf6689352b167fe51298ee6394,kubernetes.io/config.seen: 2023-09-07T00:38:13.061247958Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-294956,Uid:caf5e1cca6b54902b1874c92a0ef3fcf,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694047153762200470,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: caf5e1cca6b54902b1874c92a0ef3fcf,kubernetes.io/config.seen: 2023-09-07T00:38:13.061239229Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&PodSandboxMetadata{Name:kube-proxy-j29lr,Uid:dec135a8-27ad-43d9-93c4-3e4eabc42c38,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694047153712290389,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:38:26.276984774Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&PodSandboxMetadata{Name:etcd-pause-294956,Uid:974f1fb5208e329e5f58a15afed38f1e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694047153657621579,Labels:map[string]string{component: etcd,io.kubernetes.contain
er.name: POD,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.77:2379,kubernetes.io/config.hash: 974f1fb5208e329e5f58a15afed38f1e,kubernetes.io/config.seen: 2023-09-07T00:38:13.061244021Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-294956,Uid:192e77cf6689352b167fe51298ee6394,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1694047150698307683,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.
io/kube-apiserver.advertise-address.endpoint: 192.168.83.77:8443,kubernetes.io/config.hash: 192e77cf6689352b167fe51298ee6394,kubernetes.io/config.seen: 2023-09-07T00:38:13.061247958Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=abd2fa98-d5ad-499d-ac90-6a46e45659c9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 00:39:55 pause-294956 crio[2454]: time="2023-09-07 00:39:55.731895623Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1e8bf426-0dcb-4ef5-a389-c353cd26fb6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:39:55 pause-294956 crio[2454]: time="2023-09-07 00:39:55.731993448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1e8bf426-0dcb-4ef5-a389-c353cd26fb6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:39:55 pause-294956 crio[2454]: time="2023-09-07 00:39:55.732252316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1e8bf426-0dcb-4ef5-a389-c353cd26fb6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.091389062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8f000e70-9156-4ba4-bbc5-661211d3b371 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.091458844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8f000e70-9156-4ba4-bbc5-661211d3b371 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.091832121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8f000e70-9156-4ba4-bbc5-661211d3b371 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.139780121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f8b36b0c-aee3-4d92-bd34-27f2e834b81c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.139941041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f8b36b0c-aee3-4d92-bd34-27f2e834b81c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.140323206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f8b36b0c-aee3-4d92-bd34-27f2e834b81c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.188584342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=581eb766-84fa-41cc-b49e-3e0a549e5770 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.188801350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=581eb766-84fa-41cc-b49e-3e0a549e5770 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.189240217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=581eb766-84fa-41cc-b49e-3e0a549e5770 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.245956720Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=22f543ac-0edd-4a18-8db9-76dbd4a80593 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.246087374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=22f543ac-0edd-4a18-8db9-76dbd4a80593 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.246479390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=22f543ac-0edd-4a18-8db9-76dbd4a80593 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.294631621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=14fe82f9-8069-4c16-b18e-6869ba4a7532 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.294780506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=14fe82f9-8069-4c16-b18e-6869ba4a7532 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.295030712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=14fe82f9-8069-4c16-b18e-6869ba4a7532 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.349013250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=135ea661-5c10-4ce3-aff9-6ed2a8f808a8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.349128802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=135ea661-5c10-4ce3-aff9-6ed2a8f808a8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.349438746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=135ea661-5c10-4ce3-aff9-6ed2a8f808a8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.391850382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e556e54b-d6b5-4dc7-a4de-be836814a2f9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.391922672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e556e54b-d6b5-4dc7-a4de-be836814a2f9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:56 pause-294956 crio[2454]: time="2023-09-07 00:39:56.392226126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e556e54b-d6b5-4dc7-a4de-be836814a2f9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	c3888ced35bdb       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 seconds ago      Running             coredns                   2                   f21ec60625ac4
	adc92da2ee793       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   16 seconds ago      Running             kube-proxy                2                   4dd8047fc599e
	f5ebce5d89b13       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   23 seconds ago      Running             kube-controller-manager   2                   8cf74ae206756
	1f0f0d5e46b6f       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   23 seconds ago      Running             kube-scheduler            2                   a36967f009b35
	4053360c364f7       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   23 seconds ago      Running             kube-apiserver            2                   63db80dc47213
	ab1fff32bb510       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   23 seconds ago      Running             etcd                      2                   9c51ce14ed2fe
	3cff45c1dfebe       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   39 seconds ago      Exited              kube-proxy                1                   4dd8047fc599e
	3583867756c5f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   40 seconds ago      Exited              coredns                   1                   f21ec60625ac4
	a9324d6118d79       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   41 seconds ago      Exited              kube-scheduler            1                   a36967f009b35
	ca98dc7a487b0       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   41 seconds ago      Exited              kube-controller-manager   1                   8cf74ae206756
	236ddb18f44a7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   41 seconds ago      Exited              etcd                      1                   9c51ce14ed2fe
	bd227a5fe0785       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   45 seconds ago      Exited              kube-apiserver            1                   f5cfb69eaac4a
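	The table above is the standard CRI listing (CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID) collected from the node. If reproducing this post-mortem by hand, the same listing should be obtainable with crictl over minikube ssh; the invocation below is illustrative only and was not executed by the test:
	
	    out/minikube-linux-amd64 -p pause-294956 ssh "sudo crictl ps -a"   # hypothetical reproduction: list running and exited containers via CRI-O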
	
	* 
	* ==> coredns [3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51769 - 40955 "HINFO IN 7592813259722698913.3056877890313329857. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014735734s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54377 - 8523 "HINFO IN 395551073806208309.8535443809114678030. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027027567s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-294956
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-294956
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=pause-294956
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_07T00_38_13_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:38:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-294956
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Sep 2023 00:39:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 00:39:39 +0000   Thu, 07 Sep 2023 00:38:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 00:39:39 +0000   Thu, 07 Sep 2023 00:38:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 00:39:39 +0000   Thu, 07 Sep 2023 00:38:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 00:39:39 +0000   Thu, 07 Sep 2023 00:38:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.77
	  Hostname:    pause-294956
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d955061976547ac8046729678eaf845
	  System UUID:                8d955061-9765-47ac-8046-729678eaf845
	  Boot ID:                    a2ac8bbe-aaa1-4c44-a4f2-9547ea315c00
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-j58nv                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     90s
	  kube-system                 etcd-pause-294956                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         103s
	  kube-system                 kube-apiserver-pause-294956             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-pause-294956    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-j29lr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-pause-294956             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  NodeHasSufficientPID     112s (x7 over 112s)  kubelet          Node pause-294956 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    112s (x8 over 112s)  kubelet          Node pause-294956 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  112s (x8 over 112s)  kubelet          Node pause-294956 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                103s                 kubelet          Node pause-294956 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  103s                 kubelet          Node pause-294956 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s                 kubelet          Node pause-294956 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s                 kubelet          Node pause-294956 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           90s                  node-controller  Node pause-294956 event: Registered Node pause-294956 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node pause-294956 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node pause-294956 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node pause-294956 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                   node-controller  Node pause-294956 event: Registered Node pause-294956 in Controller
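	The node description above is kubectl's view of the single control-plane node for this profile. As an illustrative sketch (assuming the kubeconfig context minikube creates for the profile), it could be regenerated with:
	
	    kubectl --context pause-294956 describe node pause-294956   # assumed context and node name; not executed by the test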
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070852] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.944549] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.775060] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.170208] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.271043] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.698954] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.134360] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.149232] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.103190] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.227586] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[Sep 7 00:38] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +9.348455] systemd-fstab-generator[1265]: Ignoring "noauto" for root device
	[Sep 7 00:39] systemd-fstab-generator[2079]: Ignoring "noauto" for root device
	[  +0.129051] systemd-fstab-generator[2090]: Ignoring "noauto" for root device
	[  +0.085032] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.529352] systemd-fstab-generator[2265]: Ignoring "noauto" for root device
	[  +0.281116] systemd-fstab-generator[2296]: Ignoring "noauto" for root device
	[  +0.461704] systemd-fstab-generator[2334]: Ignoring "noauto" for root device
	[ +20.295351] systemd-fstab-generator[3323]: Ignoring "noauto" for root device
	[  +8.364156] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8] <==
	* {"level":"info","ts":"2023-09-07T00:39:16.798288Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2023-09-07T00:39:18.456914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-07T00:39:18.456978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-07T00:39:18.457019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgPreVoteResp from a3b04ba9ccd2eedd at term 2"}
	{"level":"info","ts":"2023-09-07T00:39:18.457034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became candidate at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:18.457042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgVoteResp from a3b04ba9ccd2eedd at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:18.457053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became leader at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:18.457063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a3b04ba9ccd2eedd elected leader a3b04ba9ccd2eedd at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:18.459927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:39:18.460159Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:39:18.461229Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.77:2379"}
	{"level":"info","ts":"2023-09-07T00:39:18.461326Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-07T00:39:18.461368Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-07T00:39:18.461347Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-07T00:39:18.459971Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a3b04ba9ccd2eedd","local-member-attributes":"{Name:pause-294956 ClientURLs:[https://192.168.83.77:2379]}","request-path":"/0/members/a3b04ba9ccd2eedd/attributes","cluster-id":"24a0af5c19e7de30","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-07T00:39:30.685959Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-07T00:39:30.686055Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-294956","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.77:2380"],"advertise-client-urls":["https://192.168.83.77:2379"]}
	{"level":"warn","ts":"2023-09-07T00:39:30.68619Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-07T00:39:30.686263Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-07T00:39:30.688217Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.77:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-07T00:39:30.688344Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.77:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-07T00:39:30.688455Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a3b04ba9ccd2eedd","current-leader-member-id":"a3b04ba9ccd2eedd"}
	{"level":"info","ts":"2023-09-07T00:39:30.692316Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2023-09-07T00:39:30.692585Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2023-09-07T00:39:30.692626Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-294956","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.77:2380"],"advertise-client-urls":["https://192.168.83.77:2379"]}
	
	* 
	* ==> etcd [ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76] <==
	* {"level":"info","ts":"2023-09-07T00:39:35.035159Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-07T00:39:35.035194Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-07T00:39:35.035439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd switched to configuration voters=(11795010616741261021)"}
	{"level":"info","ts":"2023-09-07T00:39:35.035529Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"24a0af5c19e7de30","local-member-id":"a3b04ba9ccd2eedd","added-peer-id":"a3b04ba9ccd2eedd","added-peer-peer-urls":["https://192.168.83.77:2380"]}
	{"level":"info","ts":"2023-09-07T00:39:35.035667Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"24a0af5c19e7de30","local-member-id":"a3b04ba9ccd2eedd","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:39:35.03571Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:39:35.047683Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-07T00:39:35.047875Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2023-09-07T00:39:35.048069Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2023-09-07T00:39:35.050684Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-07T00:39:35.05061Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a3b04ba9ccd2eedd","initial-advertise-peer-urls":["https://192.168.83.77:2380"],"listen-peer-urls":["https://192.168.83.77:2380"],"advertise-client-urls":["https://192.168.83.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-07T00:39:36.801937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:36.802011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:36.802046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgPreVoteResp from a3b04ba9ccd2eedd at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:36.802068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became candidate at term 4"}
	{"level":"info","ts":"2023-09-07T00:39:36.802084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgVoteResp from a3b04ba9ccd2eedd at term 4"}
	{"level":"info","ts":"2023-09-07T00:39:36.802097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became leader at term 4"}
	{"level":"info","ts":"2023-09-07T00:39:36.802109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a3b04ba9ccd2eedd elected leader a3b04ba9ccd2eedd at term 4"}
	{"level":"info","ts":"2023-09-07T00:39:36.808073Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a3b04ba9ccd2eedd","local-member-attributes":"{Name:pause-294956 ClientURLs:[https://192.168.83.77:2379]}","request-path":"/0/members/a3b04ba9ccd2eedd/attributes","cluster-id":"24a0af5c19e7de30","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-07T00:39:36.80822Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:39:36.808503Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-07T00:39:36.808644Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-07T00:39:36.808812Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:39:36.8102Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-07T00:39:36.810708Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.77:2379"}
	
	* 
	* ==> kernel <==
	*  00:39:56 up 2 min,  0 users,  load average: 2.17, 0.87, 0.33
	Linux pause-294956 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db] <==
	* I0907 00:39:38.819886       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0907 00:39:38.886616       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0907 00:39:38.886812       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0907 00:39:38.992260       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0907 00:39:38.997042       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0907 00:39:39.017583       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0907 00:39:39.023318       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0907 00:39:39.023696       1 aggregator.go:166] initial CRD sync complete...
	I0907 00:39:39.023828       1 autoregister_controller.go:141] Starting autoregister controller
	I0907 00:39:39.023856       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0907 00:39:39.023882       1 cache.go:39] Caches are synced for autoregister controller
	I0907 00:39:39.027336       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0907 00:39:39.027413       1 shared_informer.go:318] Caches are synced for configmaps
	I0907 00:39:39.027471       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0907 00:39:39.028582       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0907 00:39:39.028659       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0907 00:39:39.043428       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0907 00:39:39.843323       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0907 00:39:40.780291       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0907 00:39:40.810712       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0907 00:39:40.868347       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0907 00:39:40.910338       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0907 00:39:40.930201       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0907 00:39:51.834609       1 controller.go:624] quota admission added evaluator for: endpoints
	I0907 00:39:51.937518       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac] <==
	* 
	* 
	* ==> kube-controller-manager [ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342] <==
	* I0907 00:39:16.824299       1 serving.go:348] Generated self-signed cert in-memory
	I0907 00:39:17.024105       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0907 00:39:17.024153       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:39:17.026158       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0907 00:39:17.026683       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0907 00:39:17.026855       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0907 00:39:17.026943       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0907 00:39:27.028994       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.83.77:8443/healthz\": dial tcp 192.168.83.77:8443: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae] <==
	* I0907 00:39:51.726837       1 shared_informer.go:318] Caches are synced for expand
	I0907 00:39:51.728025       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0907 00:39:51.728212       1 shared_informer.go:318] Caches are synced for service account
	I0907 00:39:51.728940       1 shared_informer.go:318] Caches are synced for cronjob
	I0907 00:39:51.732415       1 shared_informer.go:318] Caches are synced for GC
	I0907 00:39:51.732488       1 shared_informer.go:318] Caches are synced for PVC protection
	I0907 00:39:51.735147       1 shared_informer.go:318] Caches are synced for deployment
	I0907 00:39:51.741942       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0907 00:39:51.742256       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="193.317µs"
	I0907 00:39:51.755086       1 shared_informer.go:318] Caches are synced for ephemeral
	I0907 00:39:51.765675       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0907 00:39:51.765860       1 shared_informer.go:318] Caches are synced for attach detach
	I0907 00:39:51.769076       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0907 00:39:51.769138       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0907 00:39:51.769151       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0907 00:39:51.821326       1 shared_informer.go:318] Caches are synced for endpoint
	I0907 00:39:51.821504       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0907 00:39:51.826280       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0907 00:39:51.829885       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0907 00:39:51.850194       1 shared_informer.go:318] Caches are synced for resource quota
	I0907 00:39:51.852491       1 shared_informer.go:318] Caches are synced for resource quota
	I0907 00:39:51.854899       1 shared_informer.go:318] Caches are synced for disruption
	I0907 00:39:52.279566       1 shared_informer.go:318] Caches are synced for garbage collector
	I0907 00:39:52.279625       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0907 00:39:52.294039       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3] <==
	* I0907 00:39:17.513176       1 server_others.go:69] "Using iptables proxy"
	E0907 00:39:17.516831       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-294956": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:18.613132       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-294956": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:20.821567       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-294956": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:25.354353       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-294956": dial tcp 192.168.83.77:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6] <==
	* I0907 00:39:40.262294       1 server_others.go:69] "Using iptables proxy"
	I0907 00:39:40.277077       1 node.go:141] Successfully retrieved node IP: 192.168.83.77
	I0907 00:39:40.329964       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0907 00:39:40.330026       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0907 00:39:40.339620       1 server_others.go:152] "Using iptables Proxier"
	I0907 00:39:40.339710       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0907 00:39:40.339962       1 server.go:846] "Version info" version="v1.28.1"
	I0907 00:39:40.339998       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:39:40.341581       1 config.go:188] "Starting service config controller"
	I0907 00:39:40.341638       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0907 00:39:40.341666       1 config.go:97] "Starting endpoint slice config controller"
	I0907 00:39:40.341672       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0907 00:39:40.342428       1 config.go:315] "Starting node config controller"
	I0907 00:39:40.342495       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0907 00:39:40.442344       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0907 00:39:40.442412       1 shared_informer.go:318] Caches are synced for service config
	I0907 00:39:40.442623       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14] <==
	* I0907 00:39:35.841525       1 serving.go:348] Generated self-signed cert in-memory
	W0907 00:39:38.943251       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0907 00:39:38.943427       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0907 00:39:38.943537       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0907 00:39:38.943565       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0907 00:39:39.004543       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0907 00:39:39.005048       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:39:39.007238       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:39:39.007306       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0907 00:39:39.008581       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0907 00:39:39.010888       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0907 00:39:39.107801       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf] <==
	* E0907 00:39:25.610838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.83.77:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:25.867242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.83.77:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:25.867382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.83.77:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:25.897429       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.83.77:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:25.897493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.83.77:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:26.263491       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.83.77:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:26.263571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.83.77:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:26.313391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.83.77:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:26.313540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.83.77:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:26.328328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.83.77:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:26.328444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.83.77:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:26.554703       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.83.77:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:26.554972       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.83.77:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:26.663923       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.83.77:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:26.663986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.83.77:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:27.019353       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.83.77:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:27.019422       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.83.77:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:27.290932       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.83.77:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:27.291005       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.83.77:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:27.408814       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.83.77:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:27.408883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.83.77:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:30.632383       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0907 00:39:30.633562       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0907 00:39:30.633678       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0907 00:39:30.634392       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-07 00:37:39 UTC, ends at Thu 2023-09-07 00:39:57 UTC. --
	Sep 07 00:39:33 pause-294956 kubelet[3329]: W0907 00:39:33.236227    3329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-294956&limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: E0907 00:39:33.236299    3329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-294956&limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: W0907 00:39:33.326181    3329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: E0907 00:39:33.326241    3329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: E0907 00:39:33.844693    3329 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-294956?timeout=10s\": dial tcp 192.168.83.77:8443: connect: connection refused" interval="1.6s"
	Sep 07 00:39:33 pause-294956 kubelet[3329]: W0907 00:39:33.907884    3329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: E0907 00:39:33.907947    3329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: W0907 00:39:33.933073    3329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: E0907 00:39:33.933126    3329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: I0907 00:39:33.959675    3329 kubelet_node_status.go:70] "Attempting to register node" node="pause-294956"
	Sep 07 00:39:33 pause-294956 kubelet[3329]: E0907 00:39:33.960134    3329 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.77:8443: connect: connection refused" node="pause-294956"
	Sep 07 00:39:35 pause-294956 kubelet[3329]: I0907 00:39:35.562125    3329 kubelet_node_status.go:70] "Attempting to register node" node="pause-294956"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.098707    3329 kubelet_node_status.go:108] "Node was previously registered" node="pause-294956"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.098896    3329 kubelet_node_status.go:73] "Successfully registered node" node="pause-294956"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.102189    3329 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.103404    3329 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.418307    3329 apiserver.go:52] "Watching apiserver"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: E0907 00:39:39.421012    3329 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-294956\" already exists" pod="kube-system/kube-controller-manager-pause-294956"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.422812    3329 topology_manager.go:215] "Topology Admit Handler" podUID="f65c3cb4-c02a-42b1-abad-251a71700f77" podNamespace="kube-system" podName="coredns-5dd5756b68-j58nv"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.424144    3329 topology_manager.go:215] "Topology Admit Handler" podUID="dec135a8-27ad-43d9-93c4-3e4eabc42c38" podNamespace="kube-system" podName="kube-proxy-j29lr"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.427004    3329 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.440411    3329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dec135a8-27ad-43d9-93c4-3e4eabc42c38-lib-modules\") pod \"kube-proxy-j29lr\" (UID: \"dec135a8-27ad-43d9-93c4-3e4eabc42c38\") " pod="kube-system/kube-proxy-j29lr"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.440843    3329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dec135a8-27ad-43d9-93c4-3e4eabc42c38-xtables-lock\") pod \"kube-proxy-j29lr\" (UID: \"dec135a8-27ad-43d9-93c4-3e4eabc42c38\") " pod="kube-system/kube-proxy-j29lr"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.725523    3329 scope.go:117] "RemoveContainer" containerID="3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.726135    3329 scope.go:117] "RemoveContainer" containerID="3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3"
	

-- /stdout --
** stderr ** 
	E0907 00:39:55.919187   39827 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17174-6470/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-294956 -n pause-294956
helpers_test.go:261: (dbg) Run:  kubectl --context pause-294956 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-294956 -n pause-294956
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-294956 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-294956 logs -n 25: (1.432904647s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:34 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:34 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:34 UTC | 07 Sep 23 00:34 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:35 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:35 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:35 UTC | 07 Sep 23 00:35 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-825679       | scheduled-stop-825679     | jenkins | v1.31.2 | 07 Sep 23 00:36 UTC | 07 Sep 23 00:36 UTC |
	| start   | -p offline-crio-315234         | offline-crio-315234       | jenkins | v1.31.2 | 07 Sep 23 00:36 UTC | 07 Sep 23 00:37 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:36 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p force-systemd-env-347596    | force-systemd-env-347596  | jenkins | v1.31.2 | 07 Sep 23 00:36 UTC | 07 Sep 23 00:37 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:36 UTC | 07 Sep 23 00:37 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-347596    | force-systemd-env-347596  | jenkins | v1.31.2 | 07 Sep 23 00:37 UTC | 07 Sep 23 00:37 UTC |
	| start   | -p pause-294956 --memory=2048  | pause-294956              | jenkins | v1.31.2 | 07 Sep 23 00:37 UTC | 07 Sep 23 00:38 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-315234         | offline-crio-315234       | jenkins | v1.31.2 | 07 Sep 23 00:37 UTC | 07 Sep 23 00:37 UTC |
	| start   | -p cert-expiration-386196      | cert-expiration-386196    | jenkins | v1.31.2 | 07 Sep 23 00:37 UTC | 07 Sep 23 00:38 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m           |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:37 UTC | 07 Sep 23 00:38 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-395302      | running-upgrade-395302    | jenkins | v1.31.2 | 07 Sep 23 00:38 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:38 UTC | 07 Sep 23 00:38 UTC |
	| start   | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:38 UTC | 07 Sep 23 00:39 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-294956                | pause-294956              | jenkins | v1.31.2 | 07 Sep 23 00:38 UTC | 07 Sep 23 00:39 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-395302      | running-upgrade-395302    | jenkins | v1.31.2 | 07 Sep 23 00:38 UTC | 07 Sep 23 00:38 UTC |
	| start   | -p force-systemd-flag-949073   | force-systemd-flag-949073 | jenkins | v1.31.2 | 07 Sep 23 00:38 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-340842 sudo    | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:39 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:39 UTC | 07 Sep 23 00:39 UTC |
	| start   | -p NoKubernetes-340842         | NoKubernetes-340842       | jenkins | v1.31.2 | 07 Sep 23 00:39 UTC |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/07 00:39:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:39:08.644002   39423 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:39:08.644127   39423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:39:08.644130   39423 out.go:309] Setting ErrFile to fd 2...
	I0907 00:39:08.644133   39423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:39:08.644348   39423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:39:08.644875   39423 out.go:303] Setting JSON to false
	I0907 00:39:08.645755   39423 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4893,"bootTime":1694042256,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:39:08.645802   39423 start.go:138] virtualization: kvm guest
	I0907 00:39:08.648195   39423 out.go:177] * [NoKubernetes-340842] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:39:08.650246   39423 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:39:08.650299   39423 notify.go:220] Checking for updates...
	I0907 00:39:08.651744   39423 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:39:08.653294   39423 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:39:08.654907   39423 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:39:08.656651   39423 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:39:08.658228   39423 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:39:08.660133   39423 config.go:182] Loaded profile config "NoKubernetes-340842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0907 00:39:08.660657   39423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:39:08.660705   39423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:39:08.675437   39423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I0907 00:39:08.675807   39423 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:39:08.676333   39423 main.go:141] libmachine: Using API Version  1
	I0907 00:39:08.676349   39423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:39:08.676713   39423 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:39:08.676887   39423 main.go:141] libmachine: (NoKubernetes-340842) Calling .DriverName
	I0907 00:39:08.677085   39423 start.go:1720] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0907 00:39:08.677100   39423 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:39:08.677375   39423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:39:08.677408   39423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:39:08.691750   39423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I0907 00:39:08.692124   39423 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:39:08.692607   39423 main.go:141] libmachine: Using API Version  1
	I0907 00:39:08.692627   39423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:39:08.692939   39423 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:39:08.693098   39423 main.go:141] libmachine: (NoKubernetes-340842) Calling .DriverName
	I0907 00:39:08.728325   39423 out.go:177] * Using the kvm2 driver based on existing profile
	I0907 00:39:08.729741   39423 start.go:298] selected driver: kvm2
	I0907 00:39:08.729750   39423 start.go:902] validating driver "kvm2" against &{Name:NoKubernetes-340842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-340842 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:39:08.729854   39423 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:39:08.730137   39423 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:39:08.730186   39423 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:39:08.743898   39423 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 00:39:08.744836   39423 cni.go:84] Creating CNI manager for ""
	I0907 00:39:08.744850   39423 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:39:08.744859   39423 start_flags.go:321] config:
	{Name:NoKubernetes-340842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-340842 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:39:08.745068   39423 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:39:08.748058   39423 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-340842
	I0907 00:39:09.983757   39088 start.go:369] acquired machines lock for "force-systemd-flag-949073" in 33.384274217s
	I0907 00:39:09.983804   39088 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-949073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-949073 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:39:09.983893   39088 start.go:125] createHost starting for "" (driver="kvm2")
	I0907 00:39:09.985639   39088 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0907 00:39:09.985817   39088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:39:09.985871   39088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:39:10.005551   39088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0907 00:39:10.006103   39088 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:39:10.006753   39088 main.go:141] libmachine: Using API Version  1
	I0907 00:39:10.006812   39088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:39:10.007172   39088 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:39:10.007368   39088 main.go:141] libmachine: (force-systemd-flag-949073) Calling .GetMachineName
	I0907 00:39:10.007524   39088 main.go:141] libmachine: (force-systemd-flag-949073) Calling .DriverName
	I0907 00:39:10.007703   39088 start.go:159] libmachine.API.Create for "force-systemd-flag-949073" (driver="kvm2")
	I0907 00:39:10.007732   39088 client.go:168] LocalClient.Create starting
	I0907 00:39:10.007768   39088 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem
	I0907 00:39:10.007806   39088 main.go:141] libmachine: Decoding PEM data...
	I0907 00:39:10.007828   39088 main.go:141] libmachine: Parsing certificate...
	I0907 00:39:10.007904   39088 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem
	I0907 00:39:10.007929   39088 main.go:141] libmachine: Decoding PEM data...
	I0907 00:39:10.007949   39088 main.go:141] libmachine: Parsing certificate...
	I0907 00:39:10.007974   39088 main.go:141] libmachine: Running pre-create checks...
	I0907 00:39:10.007990   39088 main.go:141] libmachine: (force-systemd-flag-949073) Calling .PreCreateCheck
	I0907 00:39:10.008296   39088 main.go:141] libmachine: (force-systemd-flag-949073) Calling .GetConfigRaw
	I0907 00:39:10.008785   39088 main.go:141] libmachine: Creating machine...
	I0907 00:39:10.008807   39088 main.go:141] libmachine: (force-systemd-flag-949073) Calling .Create
	I0907 00:39:10.008953   39088 main.go:141] libmachine: (force-systemd-flag-949073) Creating KVM machine...
	I0907 00:39:10.010120   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | found existing default KVM network
	I0907 00:39:10.011588   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:10.011400   39469 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cb:69:26} reservation:<nil>}
	I0907 00:39:10.012572   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:10.012498   39469 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002dc090}
	I0907 00:39:10.018586   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | trying to create private KVM network mk-force-systemd-flag-949073 192.168.50.0/24...
	I0907 00:39:10.101119   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | private KVM network mk-force-systemd-flag-949073 192.168.50.0/24 created
	I0907 00:39:10.101160   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:10.101045   39469 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:39:10.101177   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting up store path in /home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073 ...
	I0907 00:39:10.101207   39088 main.go:141] libmachine: (force-systemd-flag-949073) Building disk image from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0907 00:39:10.101237   39088 main.go:141] libmachine: (force-systemd-flag-949073) Downloading /home/jenkins/minikube-integration/17174-6470/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0907 00:39:10.308846   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:10.308655   39469 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073/id_rsa...
	I0907 00:39:10.358864   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:10.358684   39469 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073/force-systemd-flag-949073.rawdisk...
	I0907 00:39:10.358905   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Writing magic tar header
	I0907 00:39:10.358920   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Writing SSH key tar header
	I0907 00:39:10.358933   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:10.358835   39469 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073 ...
	I0907 00:39:10.358951   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073
	I0907 00:39:10.359046   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines
	I0907 00:39:10.359076   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073 (perms=drwx------)
	I0907 00:39:10.359088   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:39:10.359103   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470
	I0907 00:39:10.359121   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines (perms=drwxr-xr-x)
	I0907 00:39:10.359135   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0907 00:39:10.359148   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home/jenkins
	I0907 00:39:10.359166   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube (perms=drwxr-xr-x)
	I0907 00:39:10.359181   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Checking permissions on dir: /home
	I0907 00:39:10.359195   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | Skipping /home - not owner
	I0907 00:39:10.359210   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470 (perms=drwxrwxr-x)
	I0907 00:39:10.359220   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0907 00:39:10.359236   39088 main.go:141] libmachine: (force-systemd-flag-949073) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0907 00:39:10.359245   39088 main.go:141] libmachine: (force-systemd-flag-949073) Creating domain...
	I0907 00:39:10.360516   39088 main.go:141] libmachine: (force-systemd-flag-949073) define libvirt domain using xml: 
	I0907 00:39:10.360563   39088 main.go:141] libmachine: (force-systemd-flag-949073) <domain type='kvm'>
	I0907 00:39:10.360581   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <name>force-systemd-flag-949073</name>
	I0907 00:39:10.360602   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <memory unit='MiB'>2048</memory>
	I0907 00:39:10.360615   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <vcpu>2</vcpu>
	I0907 00:39:10.360627   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <features>
	I0907 00:39:10.360639   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <acpi/>
	I0907 00:39:10.360650   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <apic/>
	I0907 00:39:10.360663   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <pae/>
	I0907 00:39:10.360679   39088 main.go:141] libmachine: (force-systemd-flag-949073)     
	I0907 00:39:10.360692   39088 main.go:141] libmachine: (force-systemd-flag-949073)   </features>
	I0907 00:39:10.360705   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <cpu mode='host-passthrough'>
	I0907 00:39:10.360720   39088 main.go:141] libmachine: (force-systemd-flag-949073)   
	I0907 00:39:10.360733   39088 main.go:141] libmachine: (force-systemd-flag-949073)   </cpu>
	I0907 00:39:10.360750   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <os>
	I0907 00:39:10.360763   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <type>hvm</type>
	I0907 00:39:10.360774   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <boot dev='cdrom'/>
	I0907 00:39:10.360786   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <boot dev='hd'/>
	I0907 00:39:10.360805   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <bootmenu enable='no'/>
	I0907 00:39:10.360817   39088 main.go:141] libmachine: (force-systemd-flag-949073)   </os>
	I0907 00:39:10.360827   39088 main.go:141] libmachine: (force-systemd-flag-949073)   <devices>
	I0907 00:39:10.360845   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <disk type='file' device='cdrom'>
	I0907 00:39:10.360861   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073/boot2docker.iso'/>
	I0907 00:39:10.360886   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <target dev='hdc' bus='scsi'/>
	I0907 00:39:10.360898   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <readonly/>
	I0907 00:39:10.360910   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </disk>
	I0907 00:39:10.360920   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <disk type='file' device='disk'>
	I0907 00:39:10.360941   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0907 00:39:10.360989   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/force-systemd-flag-949073/force-systemd-flag-949073.rawdisk'/>
	I0907 00:39:10.361020   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <target dev='hda' bus='virtio'/>
	I0907 00:39:10.361037   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </disk>
	I0907 00:39:10.361050   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <interface type='network'>
	I0907 00:39:10.361065   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <source network='mk-force-systemd-flag-949073'/>
	I0907 00:39:10.361077   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <model type='virtio'/>
	I0907 00:39:10.361090   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </interface>
	I0907 00:39:10.361111   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <interface type='network'>
	I0907 00:39:10.361136   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <source network='default'/>
	I0907 00:39:10.361154   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <model type='virtio'/>
	I0907 00:39:10.361169   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </interface>
	I0907 00:39:10.361192   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <serial type='pty'>
	I0907 00:39:10.361204   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <target port='0'/>
	I0907 00:39:10.361217   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </serial>
	I0907 00:39:10.361229   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <console type='pty'>
	I0907 00:39:10.361250   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <target type='serial' port='0'/>
	I0907 00:39:10.361264   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </console>
	I0907 00:39:10.361279   39088 main.go:141] libmachine: (force-systemd-flag-949073)     <rng model='virtio'>
	I0907 00:39:10.361292   39088 main.go:141] libmachine: (force-systemd-flag-949073)       <backend model='random'>/dev/random</backend>
	I0907 00:39:10.361305   39088 main.go:141] libmachine: (force-systemd-flag-949073)     </rng>
	I0907 00:39:10.361320   39088 main.go:141] libmachine: (force-systemd-flag-949073)     
	I0907 00:39:10.361334   39088 main.go:141] libmachine: (force-systemd-flag-949073)     
	I0907 00:39:10.361346   39088 main.go:141] libmachine: (force-systemd-flag-949073)   </devices>
	I0907 00:39:10.361359   39088 main.go:141] libmachine: (force-systemd-flag-949073) </domain>
	I0907 00:39:10.361370   39088 main.go:141] libmachine: (force-systemd-flag-949073) 
	I0907 00:39:10.365596   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:ef:95:d1 in network default
	I0907 00:39:10.366187   39088 main.go:141] libmachine: (force-systemd-flag-949073) Ensuring networks are active...
	I0907 00:39:10.366218   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:10.367039   39088 main.go:141] libmachine: (force-systemd-flag-949073) Ensuring network default is active
	I0907 00:39:10.367551   39088 main.go:141] libmachine: (force-systemd-flag-949073) Ensuring network mk-force-systemd-flag-949073 is active
	I0907 00:39:10.368250   39088 main.go:141] libmachine: (force-systemd-flag-949073) Getting domain xml...
	I0907 00:39:10.369072   39088 main.go:141] libmachine: (force-systemd-flag-949073) Creating domain...
	I0907 00:39:09.715974   38859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:39:09.716000   38859 machine.go:91] provisioned docker machine in 6.34903422s
	I0907 00:39:09.716012   38859 start.go:300] post-start starting for "pause-294956" (driver="kvm2")
	I0907 00:39:09.716024   38859 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:39:09.716084   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.716572   38859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:39:09.716599   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:09.720063   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.720538   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.720570   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.720758   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:09.720970   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.721191   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:09.721361   38859 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/pause-294956/id_rsa Username:docker}
	I0907 00:39:09.813757   38859 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:39:09.818272   38859 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:39:09.818299   38859 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:39:09.818364   38859 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:39:09.818464   38859 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:39:09.818582   38859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:39:09.829380   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:39:09.852567   38859 start.go:303] post-start completed in 136.539395ms
	I0907 00:39:09.852591   38859 fix.go:56] fixHost completed within 6.512317401s
	I0907 00:39:09.852610   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:09.855487   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.855838   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.855871   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.856002   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:09.856223   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.856403   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.856553   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:09.856739   38859 main.go:141] libmachine: Using SSH client type: native
	I0907 00:39:09.857113   38859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.77 22 <nil> <nil>}
	I0907 00:39:09.857125   38859 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:39:09.983615   38859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047149.980569124
	
	I0907 00:39:09.983635   38859 fix.go:206] guest clock: 1694047149.980569124
	I0907 00:39:09.983642   38859 fix.go:219] Guest: 2023-09-07 00:39:09.980569124 +0000 UTC Remote: 2023-09-07 00:39:09.852594691 +0000 UTC m=+37.654584514 (delta=127.974433ms)
	I0907 00:39:09.983659   38859 fix.go:190] guest clock delta is within tolerance: 127.974433ms
	I0907 00:39:09.983674   38859 start.go:83] releasing machines lock for "pause-294956", held for 6.643416087s
	I0907 00:39:09.983697   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.983968   38859 main.go:141] libmachine: (pause-294956) Calling .GetIP
	I0907 00:39:09.986882   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.987290   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.987330   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.987474   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.988078   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.988257   38859 main.go:141] libmachine: (pause-294956) Calling .DriverName
	I0907 00:39:09.988324   38859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:39:09.988361   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:09.988485   38859 ssh_runner.go:195] Run: cat /version.json
	I0907 00:39:09.988514   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHHostname
	I0907 00:39:09.991158   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.991342   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.991556   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.991587   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.991694   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:09.991718   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:09.991835   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:09.991921   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHPort
	I0907 00:39:09.992016   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.992125   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHKeyPath
	I0907 00:39:09.992184   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:09.992270   38859 main.go:141] libmachine: (pause-294956) Calling .GetSSHUsername
	I0907 00:39:09.992332   38859 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/pause-294956/id_rsa Username:docker}
	I0907 00:39:09.992422   38859 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/pause-294956/id_rsa Username:docker}
	I0907 00:39:10.103715   38859 ssh_runner.go:195] Run: systemctl --version
	I0907 00:39:10.110193   38859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:39:10.265449   38859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:39:10.271232   38859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:39:10.271336   38859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:39:10.280748   38859 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0907 00:39:10.280772   38859 start.go:466] detecting cgroup driver to use...
	I0907 00:39:10.280823   38859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:39:10.296111   38859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:39:10.310045   38859 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:39:10.310103   38859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:39:10.325785   38859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:39:10.339992   38859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:39:10.482210   38859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:39:10.875961   38859 docker.go:212] disabling docker service ...
	I0907 00:39:10.876050   38859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:39:10.985146   38859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:39:11.020435   38859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:39:11.325443   38859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:39:11.620462   38859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:39:11.651188   38859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:39:11.690676   38859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:39:11.690749   38859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:39:11.713331   38859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:39:11.713409   38859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:39:11.735888   38859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:39:11.750169   38859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:39:11.768373   38859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:39:11.784803   38859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:39:11.801886   38859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:39:11.816664   38859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:39:12.058238   38859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:39:13.320854   38859 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.262576824s)
	I0907 00:39:13.320880   38859 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:39:13.320942   38859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:39:13.332937   38859 start.go:534] Will wait 60s for crictl version
	I0907 00:39:13.333018   38859 ssh_runner.go:195] Run: which crictl
	I0907 00:39:13.339493   38859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:39:13.406105   38859 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:39:13.406190   38859 ssh_runner.go:195] Run: crio --version
	I0907 00:39:13.479873   38859 ssh_runner.go:195] Run: crio --version
	I0907 00:39:13.539941   38859 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:39:08.749502   39423 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0907 00:39:08.863706   39423 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0907 00:39:08.863894   39423 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/NoKubernetes-340842/config.json ...
	I0907 00:39:08.864213   39423 start.go:365] acquiring machines lock for NoKubernetes-340842: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:39:11.676466   39088 main.go:141] libmachine: (force-systemd-flag-949073) Waiting to get IP...
	I0907 00:39:11.677383   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:11.677850   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:11.677881   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:11.677823   39469 retry.go:31] will retry after 270.37335ms: waiting for machine to come up
	I0907 00:39:11.950453   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:11.951132   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:11.951160   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:11.951073   39469 retry.go:31] will retry after 374.861139ms: waiting for machine to come up
	I0907 00:39:12.327645   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:12.328103   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:12.328133   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:12.328048   39469 retry.go:31] will retry after 360.414902ms: waiting for machine to come up
	I0907 00:39:12.689723   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:12.690301   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:12.690330   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:12.690247   39469 retry.go:31] will retry after 523.655417ms: waiting for machine to come up
	I0907 00:39:13.215882   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:13.216416   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:13.216457   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:13.216364   39469 retry.go:31] will retry after 688.28809ms: waiting for machine to come up
	I0907 00:39:13.906112   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:13.906723   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:13.906791   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:13.906659   39469 retry.go:31] will retry after 683.446024ms: waiting for machine to come up
	I0907 00:39:14.591621   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:14.592163   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:14.592190   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:14.592114   39469 retry.go:31] will retry after 728.832955ms: waiting for machine to come up
	I0907 00:39:15.322514   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:15.323107   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:15.323133   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:15.323045   39469 retry.go:31] will retry after 1.000252465s: waiting for machine to come up
	I0907 00:39:16.324905   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | domain force-systemd-flag-949073 has defined MAC address 52:54:00:4b:2b:90 in network mk-force-systemd-flag-949073
	I0907 00:39:16.325332   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | unable to find current IP address of domain force-systemd-flag-949073 in network mk-force-systemd-flag-949073
	I0907 00:39:16.325360   39088 main.go:141] libmachine: (force-systemd-flag-949073) DBG | I0907 00:39:16.325296   39469 retry.go:31] will retry after 1.455092303s: waiting for machine to come up
	I0907 00:39:13.541572   38859 main.go:141] libmachine: (pause-294956) Calling .GetIP
	I0907 00:39:13.544263   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:13.544700   38859 main.go:141] libmachine: (pause-294956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:6f:0c", ip: ""} in network mk-pause-294956: {Iface:virbr3 ExpiryTime:2023-09-07 01:37:43 +0000 UTC Type:0 Mac:52:54:00:79:6f:0c Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-294956 Clientid:01:52:54:00:79:6f:0c}
	I0907 00:39:13.544730   38859 main.go:141] libmachine: (pause-294956) DBG | domain pause-294956 has defined IP address 192.168.83.77 and MAC address 52:54:00:79:6f:0c in network mk-pause-294956
	I0907 00:39:13.544960   38859 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0907 00:39:13.549645   38859 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:39:13.549696   38859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:39:13.581266   38859 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:39:13.581287   38859 crio.go:415] Images already preloaded, skipping extraction
	I0907 00:39:13.581345   38859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:39:13.794694   38859 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:39:13.794714   38859 cache_images.go:84] Images are preloaded, skipping loading
	I0907 00:39:13.794905   38859 ssh_runner.go:195] Run: crio config
	I0907 00:39:14.064040   38859 cni.go:84] Creating CNI manager for ""
	I0907 00:39:14.064067   38859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:39:14.064094   38859 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:39:14.064120   38859 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.77 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-294956 NodeName:pause-294956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:39:14.064357   38859 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-294956"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:39:14.064466   38859 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-294956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-294956 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:39:14.064551   38859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:39:14.109096   38859 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:39:14.109176   38859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:39:14.125881   38859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0907 00:39:14.148322   38859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:39:14.187804   38859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0907 00:39:14.217920   38859 ssh_runner.go:195] Run: grep 192.168.83.77	control-plane.minikube.internal$ /etc/hosts
	I0907 00:39:14.225599   38859 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956 for IP: 192.168.83.77
	I0907 00:39:14.225629   38859 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:39:14.225777   38859 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:39:14.225828   38859 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:39:14.225924   38859 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/client.key
	I0907 00:39:14.226003   38859 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/apiserver.key.4ae8af40
	I0907 00:39:14.226057   38859 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/proxy-client.key
	I0907 00:39:14.226195   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:39:14.226235   38859 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:39:14.226249   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:39:14.226285   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:39:14.226318   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:39:14.226345   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:39:14.226403   38859 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:39:14.227158   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:39:14.269211   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:39:14.311009   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:39:14.365060   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/pause-294956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:39:14.414904   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:39:14.457502   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:39:14.494350   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:39:14.530318   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:39:14.572224   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:39:14.617742   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:39:14.656379   38859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:39:14.711525   38859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:39:14.751567   38859 ssh_runner.go:195] Run: openssl version
	I0907 00:39:14.759855   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:39:14.783810   38859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:39:14.796756   38859 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:39:14.796830   38859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:39:14.809870   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:39:14.827316   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:39:14.855011   38859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:39:14.871063   38859 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:39:14.871137   38859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:39:14.906153   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:39:14.920170   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:39:14.939114   38859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:39:14.948233   38859 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:39:14.948303   38859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:39:14.959504   38859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:39:14.974396   38859 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:39:14.984652   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:39:14.995492   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:39:15.002267   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:39:15.009529   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:39:15.017512   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:39:15.024828   38859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:39:15.031849   38859 kubeadm.go:404] StartCluster: {Name:pause-294956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-294956 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.77 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:39:15.031990   38859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:39:15.032058   38859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:39:15.117361   38859 cri.go:89] found id: "bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac"
	I0907 00:39:15.117385   38859 cri.go:89] found id: "2b7b254d94b7014206733774169b22d53d3312359a323bb60853ae04b2a2fc31"
	I0907 00:39:15.117393   38859 cri.go:89] found id: "628ac1485aff494846808e0a39f3a015cac8ed064f64dbace59a80782f95cee2"
	I0907 00:39:15.117399   38859 cri.go:89] found id: "a27f0726cefa5115c02c120f29b4f47821916ac6caab862e03e9f0cb15234333"
	I0907 00:39:15.117441   38859 cri.go:89] found id: "3fb1eeb160abea30714fbbf94f48e6c659b9d39b41ee06690b8a3efe1e63f356"
	I0907 00:39:15.117450   38859 cri.go:89] found id: "4dbcb81e9e550322a617dccff3ec9cec6f06322798208999657ecbaa5198d21c"
	I0907 00:39:15.117469   38859 cri.go:89] found id: ""
	I0907 00:39:15.117519   38859 ssh_runner.go:195] Run: sudo runc list -f json
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-07 00:37:39 UTC, ends at Thu 2023-09-07 00:39:58 UTC. --
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.219200597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8356bb8d-eac7-4f36-ae7f-75a3e6e0b6ee name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.219446256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8356bb8d-eac7-4f36-ae7f-75a3e6e0b6ee name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.266676723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=19b539b8-bc83-4249-a7c6-1e7d14564a4b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.266872099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=19b539b8-bc83-4249-a7c6-1e7d14564a4b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.267159924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=19b539b8-bc83-4249-a7c6-1e7d14564a4b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.317423684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=66d78a35-4a82-4571-aeef-252a1b47bd05 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.317526682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=66d78a35-4a82-4571-aeef-252a1b47bd05 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.317848428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=66d78a35-4a82-4571-aeef-252a1b47bd05 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.360959758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3fb7c0f1-1bfd-4094-a911-62289aba51a1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.361055791Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3fb7c0f1-1bfd-4094-a911-62289aba51a1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.361588373Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3fb7c0f1-1bfd-4094-a911-62289aba51a1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.407603155Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9c1868fe-19db-463c-bc44-e699e5804179 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.407698377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9c1868fe-19db-463c-bc44-e699e5804179 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.408173310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9c1868fe-19db-463c-bc44-e699e5804179 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.452223296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5c795938-11f4-4ee5-98bb-d1176f4b8365 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.452290345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5c795938-11f4-4ee5-98bb-d1176f4b8365 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.453078153Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5c795938-11f4-4ee5-98bb-d1176f4b8365 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.490568683Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6182d109-dc6a-435f-aa03-78cb5a321596 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.490871003Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-j58nv,Uid:f65c3cb4-c02a-42b1-abad-251a71700f77,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694047153826312391,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:38:26.482040380Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-294956,Uid:75b5f5b91fd0ece08743fa5e2dd7e632,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694047153788522899,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 75b5f5b91fd0ece08743fa5e2dd7e632,kubernetes.io/config.seen: 2023-09-07T00:38:13.061248830Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-294956,Uid:192e77cf6689352b167fe51298ee6394,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694047153775027399,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf66893
52b167fe51298ee6394,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.77:8443,kubernetes.io/config.hash: 192e77cf6689352b167fe51298ee6394,kubernetes.io/config.seen: 2023-09-07T00:38:13.061247958Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-294956,Uid:caf5e1cca6b54902b1874c92a0ef3fcf,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694047153762200470,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: caf5e1cca6b54902b1874c92a0ef3fcf,kubernetes.io/config.seen: 2023-09-07T00:38:13.061239229Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&PodSandboxMetadata{Name:kube-proxy-j29lr,Uid:dec135a8-27ad-43d9-93c4-3e4eabc42c38,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694047153712290389,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:38:26.276984774Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&PodSandboxMetadata{Name:etcd-pause-294956,Uid:974f1fb5208e329e5f58a15afed38f1e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694047153657621579,Labels:map[string]string{component: etcd,io.kubernetes.contain
er.name: POD,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.77:2379,kubernetes.io/config.hash: 974f1fb5208e329e5f58a15afed38f1e,kubernetes.io/config.seen: 2023-09-07T00:38:13.061244021Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=6182d109-dc6a-435f-aa03-78cb5a321596 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.491877946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=670ca0e9-7216-422a-a880-e266a9dd1467 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.491961058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=670ca0e9-7216-422a-a880-e266a9dd1467 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.492156791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=670ca0e9-7216-422a-a880-e266a9dd1467 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.510619173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=47c4f55c-2be5-43c4-8c46-ebe2d572506b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.510767357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=47c4f55c-2be5-43c4-8c46-ebe2d572506b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 00:39:58 pause-294956 crio[2454]: time="2023-09-07 00:39:58.511009741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047179807578513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047179773467894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash: a87b3bbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047173185997429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1c
ca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047173206554622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047173131793290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db,PodSandboxId:63db80dc4721351f02d83d9ef9b2e968da6b9c5a54d82303907f65df56b71fdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047173151079067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf6689352b167fe51298ee6394,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3,PodSandboxId:4dd8047fc599e2a6e3a8b6c26c452de0adbbd51a8c1abf057356166b9a57855a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694047157283583586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j29lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec135a8-27ad-43d9-93c4-3e4eabc42c38,},Annotations:map[string]string{io.kubernetes.container.hash:
a87b3bbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3,PodSandboxId:f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694047156187096086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j58nv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65c3cb4-c02a-42b1-abad-251a71700f77,},Annotations:map[string]string{io.kubernetes.container.hash: d44b11f9,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf,PodSandboxId:a36967f009b352aa0a124b1ee9e94a5f22135f97101f37bf3b288037e57e216d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694047155459786579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf5e1cca6b54902b1874c92a0ef3fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342,PodSandboxId:8cf74ae206756d47a6a2f7e7d644e8a0f698e377a5ecd56dfe44df104fd12c06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_EXITED,CreatedAt:1694047155124292117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b5f5b91fd0ece08743fa5e2dd7e632,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8,PodSandboxId:9c51ce14ed2feeb61c2571f0a630bbf81c1438460749c19f6a0bf1e082494156,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694047154721476785,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-294956,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 974f1fb5208e329e5f58a15afed38f1e,},Annotations:map[string]string{io.kubernetes.container.hash: c196df3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac,PodSandboxId:f5cfb69eaac4ac3d052ad79118a732b5c380e0db2c908eb38d1efafd28a49615,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694047151382953438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-294956,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192e77cf
6689352b167fe51298ee6394,},Annotations:map[string]string{io.kubernetes.container.hash: 1e6630b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=47c4f55c-2be5-43c4-8c46-ebe2d572506b name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	c3888ced35bdb       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 seconds ago      Running             coredns                   2                   f21ec60625ac4
	adc92da2ee793       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   18 seconds ago      Running             kube-proxy                2                   4dd8047fc599e
	f5ebce5d89b13       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   25 seconds ago      Running             kube-controller-manager   2                   8cf74ae206756
	1f0f0d5e46b6f       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   25 seconds ago      Running             kube-scheduler            2                   a36967f009b35
	4053360c364f7       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   25 seconds ago      Running             kube-apiserver            2                   63db80dc47213
	ab1fff32bb510       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   25 seconds ago      Running             etcd                      2                   9c51ce14ed2fe
	3cff45c1dfebe       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   41 seconds ago      Exited              kube-proxy                1                   4dd8047fc599e
	3583867756c5f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   42 seconds ago      Exited              coredns                   1                   f21ec60625ac4
	a9324d6118d79       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   43 seconds ago      Exited              kube-scheduler            1                   a36967f009b35
	ca98dc7a487b0       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   43 seconds ago      Exited              kube-controller-manager   1                   8cf74ae206756
	236ddb18f44a7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   43 seconds ago      Exited              etcd                      1                   9c51ce14ed2fe
	bd227a5fe0785       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   47 seconds ago      Exited              kube-apiserver            1                   f5cfb69eaac4a
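	The table above is the CRI-O container listing gathered for the pause-294956 profile. As a rough sketch only (not the exact command the log collector runs, and assuming the pause-294956 VM is still up), an equivalent listing can be pulled straight from the node with crictl over minikube ssh; the --pod filter below is illustrative, with the full pod sandbox ID copied from the ListContainers dump above:
	
	  out/minikube-linux-amd64 -p pause-294956 ssh -- sudo crictl ps -a
	  # narrow to the coredns pod's containers using its full pod sandbox ID
	  out/minikube-linux-amd64 -p pause-294956 ssh -- sudo crictl ps -a --pod f21ec60625ac49497236d9e31a9533b786ddfdc009ad389870aed032049f8ab9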
	
	* 
	* ==> coredns [3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51769 - 40955 "HINFO IN 7592813259722698913.3056877890313329857. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014735734s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
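	The connection-refused warning above falls in the window where the attempt-1 kube-apiserver container had exited and attempt 2 was not yet serving. For reference, and assuming the kubeconfig context carries the profile name as minikube normally sets it, the previous coredns container's log can also be retrieved with kubectl (a sketch, not the harness's own collection step):
	
	  kubectl --context pause-294956 -n kube-system logs coredns-5dd5756b68-j58nv --previous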
	
	* 
	* ==> coredns [c3888ced35bdbb54f7049a216367d42077a4aa7889273a176d4bfe3aec5340db] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54377 - 8523 "HINFO IN 395551073806208309.8535443809114678030. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027027567s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-294956
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-294956
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=pause-294956
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_07T00_38_13_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:38:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-294956
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Sep 2023 00:39:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 00:39:39 +0000   Thu, 07 Sep 2023 00:38:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 00:39:39 +0000   Thu, 07 Sep 2023 00:38:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 00:39:39 +0000   Thu, 07 Sep 2023 00:38:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 00:39:39 +0000   Thu, 07 Sep 2023 00:38:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.77
	  Hostname:    pause-294956
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d955061976547ac8046729678eaf845
	  System UUID:                8d955061-9765-47ac-8046-729678eaf845
	  Boot ID:                    a2ac8bbe-aaa1-4c44-a4f2-9547ea315c00
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-j58nv                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     92s
	  kube-system                 etcd-pause-294956                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         105s
	  kube-system                 kube-apiserver-pause-294956             250m (12%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-pause-294956    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-j29lr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-pause-294956             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 18s                  kube-proxy       
	  Normal  NodeHasSufficientPID     114s (x7 over 114s)  kubelet          Node pause-294956 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node pause-294956 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node pause-294956 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                105s                 kubelet          Node pause-294956 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  105s                 kubelet          Node pause-294956 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s                 kubelet          Node pause-294956 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s                 kubelet          Node pause-294956 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           92s                  node-controller  Node pause-294956 event: Registered Node pause-294956 in Controller
	  Normal  Starting                 26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)    kubelet          Node pause-294956 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)    kubelet          Node pause-294956 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)    kubelet          Node pause-294956 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                   node-controller  Node pause-294956 event: Registered Node pause-294956 in Controller
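	If the cluster is still running, the node description above can be regenerated directly; this assumes the kubeconfig context is named after the profile, as minikube normally configures it:
	
	  kubectl --context pause-294956 describe node pause-294956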
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070852] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.944549] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.775060] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.170208] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.271043] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.698954] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.134360] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.149232] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.103190] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.227586] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[Sep 7 00:38] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +9.348455] systemd-fstab-generator[1265]: Ignoring "noauto" for root device
	[Sep 7 00:39] systemd-fstab-generator[2079]: Ignoring "noauto" for root device
	[  +0.129051] systemd-fstab-generator[2090]: Ignoring "noauto" for root device
	[  +0.085032] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.529352] systemd-fstab-generator[2265]: Ignoring "noauto" for root device
	[  +0.281116] systemd-fstab-generator[2296]: Ignoring "noauto" for root device
	[  +0.461704] systemd-fstab-generator[2334]: Ignoring "noauto" for root device
	[ +20.295351] systemd-fstab-generator[3323]: Ignoring "noauto" for root device
	[  +8.364156] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [236ddb18f44a72c66fba59ec0010cfdeae0a5e509f126c46d1d0c8ee3f5a58b8] <==
	* {"level":"info","ts":"2023-09-07T00:39:16.798288Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2023-09-07T00:39:18.456914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-07T00:39:18.456978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-07T00:39:18.457019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgPreVoteResp from a3b04ba9ccd2eedd at term 2"}
	{"level":"info","ts":"2023-09-07T00:39:18.457034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became candidate at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:18.457042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgVoteResp from a3b04ba9ccd2eedd at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:18.457053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became leader at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:18.457063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a3b04ba9ccd2eedd elected leader a3b04ba9ccd2eedd at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:18.459927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:39:18.460159Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:39:18.461229Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.77:2379"}
	{"level":"info","ts":"2023-09-07T00:39:18.461326Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-07T00:39:18.461368Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-07T00:39:18.461347Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-07T00:39:18.459971Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a3b04ba9ccd2eedd","local-member-attributes":"{Name:pause-294956 ClientURLs:[https://192.168.83.77:2379]}","request-path":"/0/members/a3b04ba9ccd2eedd/attributes","cluster-id":"24a0af5c19e7de30","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-07T00:39:30.685959Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-07T00:39:30.686055Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-294956","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.77:2380"],"advertise-client-urls":["https://192.168.83.77:2379"]}
	{"level":"warn","ts":"2023-09-07T00:39:30.68619Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-07T00:39:30.686263Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-07T00:39:30.688217Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.77:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-07T00:39:30.688344Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.77:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-07T00:39:30.688455Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a3b04ba9ccd2eedd","current-leader-member-id":"a3b04ba9ccd2eedd"}
	{"level":"info","ts":"2023-09-07T00:39:30.692316Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2023-09-07T00:39:30.692585Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2023-09-07T00:39:30.692626Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-294956","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.77:2380"],"advertise-client-urls":["https://192.168.83.77:2379"]}
	
	* 
	* ==> etcd [ab1fff32bb5104574d383e62655a47c0d9f3459151efe688e12fac802223ea76] <==
	* {"level":"info","ts":"2023-09-07T00:39:35.035159Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-07T00:39:35.035194Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-07T00:39:35.035439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd switched to configuration voters=(11795010616741261021)"}
	{"level":"info","ts":"2023-09-07T00:39:35.035529Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"24a0af5c19e7de30","local-member-id":"a3b04ba9ccd2eedd","added-peer-id":"a3b04ba9ccd2eedd","added-peer-peer-urls":["https://192.168.83.77:2380"]}
	{"level":"info","ts":"2023-09-07T00:39:35.035667Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"24a0af5c19e7de30","local-member-id":"a3b04ba9ccd2eedd","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:39:35.03571Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:39:35.047683Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-07T00:39:35.047875Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2023-09-07T00:39:35.048069Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2023-09-07T00:39:35.050684Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-07T00:39:35.05061Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a3b04ba9ccd2eedd","initial-advertise-peer-urls":["https://192.168.83.77:2380"],"listen-peer-urls":["https://192.168.83.77:2380"],"advertise-client-urls":["https://192.168.83.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-07T00:39:36.801937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:36.802011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:36.802046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgPreVoteResp from a3b04ba9ccd2eedd at term 3"}
	{"level":"info","ts":"2023-09-07T00:39:36.802068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became candidate at term 4"}
	{"level":"info","ts":"2023-09-07T00:39:36.802084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgVoteResp from a3b04ba9ccd2eedd at term 4"}
	{"level":"info","ts":"2023-09-07T00:39:36.802097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became leader at term 4"}
	{"level":"info","ts":"2023-09-07T00:39:36.802109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a3b04ba9ccd2eedd elected leader a3b04ba9ccd2eedd at term 4"}
	{"level":"info","ts":"2023-09-07T00:39:36.808073Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a3b04ba9ccd2eedd","local-member-attributes":"{Name:pause-294956 ClientURLs:[https://192.168.83.77:2379]}","request-path":"/0/members/a3b04ba9ccd2eedd/attributes","cluster-id":"24a0af5c19e7de30","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-07T00:39:36.80822Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:39:36.808503Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-07T00:39:36.808644Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-07T00:39:36.808812Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:39:36.8102Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-07T00:39:36.810708Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.77:2379"}
	
	* 
	* ==> kernel <==
	*  00:39:58 up 2 min,  0 users,  load average: 2.17, 0.87, 0.33
	Linux pause-294956 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4053360c364f7f1c9fb39ff8a36bb28bac899c339b510fea0d08796eda87f1db] <==
	* I0907 00:39:38.819886       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0907 00:39:38.886616       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0907 00:39:38.886812       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0907 00:39:38.992260       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0907 00:39:38.997042       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0907 00:39:39.017583       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0907 00:39:39.023318       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0907 00:39:39.023696       1 aggregator.go:166] initial CRD sync complete...
	I0907 00:39:39.023828       1 autoregister_controller.go:141] Starting autoregister controller
	I0907 00:39:39.023856       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0907 00:39:39.023882       1 cache.go:39] Caches are synced for autoregister controller
	I0907 00:39:39.027336       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0907 00:39:39.027413       1 shared_informer.go:318] Caches are synced for configmaps
	I0907 00:39:39.027471       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0907 00:39:39.028582       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0907 00:39:39.028659       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0907 00:39:39.043428       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0907 00:39:39.843323       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0907 00:39:40.780291       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0907 00:39:40.810712       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0907 00:39:40.868347       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0907 00:39:40.910338       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0907 00:39:40.930201       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0907 00:39:51.834609       1 controller.go:624] quota admission added evaluator for: endpoints
	I0907 00:39:51.937518       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [bd227a5fe07852606fc7a910e40c4be3f46be497884c2d484516d629e5b726ac] <==
	* 
	* 
	* ==> kube-controller-manager [ca98dc7a487b03d513a02aaca0e25101dafd44716be975082b3d1b28a4a71342] <==
	* I0907 00:39:16.824299       1 serving.go:348] Generated self-signed cert in-memory
	I0907 00:39:17.024105       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0907 00:39:17.024153       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:39:17.026158       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0907 00:39:17.026683       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0907 00:39:17.026855       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0907 00:39:17.026943       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0907 00:39:27.028994       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.83.77:8443/healthz\": dial tcp 192.168.83.77:8443: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [f5ebce5d89b13cf6e24544cb6663a4795e17467dc3b98b3cc50b6d9ca7a4e0ae] <==
	* I0907 00:39:51.726837       1 shared_informer.go:318] Caches are synced for expand
	I0907 00:39:51.728025       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0907 00:39:51.728212       1 shared_informer.go:318] Caches are synced for service account
	I0907 00:39:51.728940       1 shared_informer.go:318] Caches are synced for cronjob
	I0907 00:39:51.732415       1 shared_informer.go:318] Caches are synced for GC
	I0907 00:39:51.732488       1 shared_informer.go:318] Caches are synced for PVC protection
	I0907 00:39:51.735147       1 shared_informer.go:318] Caches are synced for deployment
	I0907 00:39:51.741942       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0907 00:39:51.742256       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="193.317µs"
	I0907 00:39:51.755086       1 shared_informer.go:318] Caches are synced for ephemeral
	I0907 00:39:51.765675       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0907 00:39:51.765860       1 shared_informer.go:318] Caches are synced for attach detach
	I0907 00:39:51.769076       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0907 00:39:51.769138       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0907 00:39:51.769151       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0907 00:39:51.821326       1 shared_informer.go:318] Caches are synced for endpoint
	I0907 00:39:51.821504       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0907 00:39:51.826280       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0907 00:39:51.829885       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0907 00:39:51.850194       1 shared_informer.go:318] Caches are synced for resource quota
	I0907 00:39:51.852491       1 shared_informer.go:318] Caches are synced for resource quota
	I0907 00:39:51.854899       1 shared_informer.go:318] Caches are synced for disruption
	I0907 00:39:52.279566       1 shared_informer.go:318] Caches are synced for garbage collector
	I0907 00:39:52.279625       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0907 00:39:52.294039       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3] <==
	* I0907 00:39:17.513176       1 server_others.go:69] "Using iptables proxy"
	E0907 00:39:17.516831       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-294956": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:18.613132       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-294956": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:20.821567       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-294956": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:25.354353       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-294956": dial tcp 192.168.83.77:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [adc92da2ee7930b93ac9ecca02d5463d339123a8e5bb1a483b6b178e893bb8a6] <==
	* I0907 00:39:40.262294       1 server_others.go:69] "Using iptables proxy"
	I0907 00:39:40.277077       1 node.go:141] Successfully retrieved node IP: 192.168.83.77
	I0907 00:39:40.329964       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0907 00:39:40.330026       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0907 00:39:40.339620       1 server_others.go:152] "Using iptables Proxier"
	I0907 00:39:40.339710       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0907 00:39:40.339962       1 server.go:846] "Version info" version="v1.28.1"
	I0907 00:39:40.339998       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:39:40.341581       1 config.go:188] "Starting service config controller"
	I0907 00:39:40.341638       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0907 00:39:40.341666       1 config.go:97] "Starting endpoint slice config controller"
	I0907 00:39:40.341672       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0907 00:39:40.342428       1 config.go:315] "Starting node config controller"
	I0907 00:39:40.342495       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0907 00:39:40.442344       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0907 00:39:40.442412       1 shared_informer.go:318] Caches are synced for service config
	I0907 00:39:40.442623       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1f0f0d5e46b6f68fe5b618ec34c93e36fef69e317811df37f33f2d63ca0f9a14] <==
	* I0907 00:39:35.841525       1 serving.go:348] Generated self-signed cert in-memory
	W0907 00:39:38.943251       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0907 00:39:38.943427       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0907 00:39:38.943537       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0907 00:39:38.943565       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0907 00:39:39.004543       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0907 00:39:39.005048       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:39:39.007238       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:39:39.007306       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0907 00:39:39.008581       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0907 00:39:39.010888       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0907 00:39:39.107801       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [a9324d6118d79a76c0ec56cb8418f9dfaae72fa6d4698a950483f66a8bc12dbf] <==
	* E0907 00:39:25.610838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.83.77:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:25.867242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.83.77:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:25.867382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.83.77:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:25.897429       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.83.77:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:25.897493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.83.77:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:26.263491       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.83.77:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:26.263571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.83.77:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:26.313391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.83.77:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:26.313540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.83.77:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:26.328328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.83.77:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:26.328444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.83.77:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:26.554703       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.83.77:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:26.554972       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.83.77:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:26.663923       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.83.77:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:26.663986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.83.77:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:27.019353       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.83.77:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:27.019422       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.83.77:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:27.290932       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.83.77:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:27.291005       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.83.77:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	W0907 00:39:27.408814       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.83.77:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:27.408883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.83.77:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	E0907 00:39:30.632383       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0907 00:39:30.633562       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0907 00:39:30.633678       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0907 00:39:30.634392       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-07 00:37:39 UTC, ends at Thu 2023-09-07 00:39:59 UTC. --
	Sep 07 00:39:33 pause-294956 kubelet[3329]: W0907 00:39:33.236227    3329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-294956&limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: E0907 00:39:33.236299    3329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-294956&limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: W0907 00:39:33.326181    3329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: E0907 00:39:33.326241    3329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: E0907 00:39:33.844693    3329 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-294956?timeout=10s\": dial tcp 192.168.83.77:8443: connect: connection refused" interval="1.6s"
	Sep 07 00:39:33 pause-294956 kubelet[3329]: W0907 00:39:33.907884    3329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: E0907 00:39:33.907947    3329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: W0907 00:39:33.933073    3329 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: E0907 00:39:33.933126    3329 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Sep 07 00:39:33 pause-294956 kubelet[3329]: I0907 00:39:33.959675    3329 kubelet_node_status.go:70] "Attempting to register node" node="pause-294956"
	Sep 07 00:39:33 pause-294956 kubelet[3329]: E0907 00:39:33.960134    3329 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.77:8443: connect: connection refused" node="pause-294956"
	Sep 07 00:39:35 pause-294956 kubelet[3329]: I0907 00:39:35.562125    3329 kubelet_node_status.go:70] "Attempting to register node" node="pause-294956"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.098707    3329 kubelet_node_status.go:108] "Node was previously registered" node="pause-294956"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.098896    3329 kubelet_node_status.go:73] "Successfully registered node" node="pause-294956"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.102189    3329 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.103404    3329 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.418307    3329 apiserver.go:52] "Watching apiserver"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: E0907 00:39:39.421012    3329 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-294956\" already exists" pod="kube-system/kube-controller-manager-pause-294956"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.422812    3329 topology_manager.go:215] "Topology Admit Handler" podUID="f65c3cb4-c02a-42b1-abad-251a71700f77" podNamespace="kube-system" podName="coredns-5dd5756b68-j58nv"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.424144    3329 topology_manager.go:215] "Topology Admit Handler" podUID="dec135a8-27ad-43d9-93c4-3e4eabc42c38" podNamespace="kube-system" podName="kube-proxy-j29lr"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.427004    3329 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.440411    3329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dec135a8-27ad-43d9-93c4-3e4eabc42c38-lib-modules\") pod \"kube-proxy-j29lr\" (UID: \"dec135a8-27ad-43d9-93c4-3e4eabc42c38\") " pod="kube-system/kube-proxy-j29lr"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.440843    3329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dec135a8-27ad-43d9-93c4-3e4eabc42c38-xtables-lock\") pod \"kube-proxy-j29lr\" (UID: \"dec135a8-27ad-43d9-93c4-3e4eabc42c38\") " pod="kube-system/kube-proxy-j29lr"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.725523    3329 scope.go:117] "RemoveContainer" containerID="3583867756c5f0724db9a2d182d6074838911f6713b61a0b19ac3fd46feceaf3"
	Sep 07 00:39:39 pause-294956 kubelet[3329]: I0907 00:39:39.726135    3329 scope.go:117] "RemoveContainer" containerID="3cff45c1dfebeb60e30c994dce5dcf40f2e6df2a1a106a7c1905bc81aa38e3c3"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:39:58.035150   40003 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17174-6470/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
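Note on the stderr above: "bufio.Scanner: token too long" is Go's bufio.ErrTooLong, raised because bufio.Scanner caps a single token (here, one line of lastStart.txt) at 64 KiB by default, so a longer line aborts the log read. The sketch below is illustrative only, not minikube's actual log-collection code; the file name is taken from the error message, and the 10 MiB limit is an arbitrary assumption. It shows how a Go reader avoids that error by enlarging the scanner's buffer.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical reader for a log file that may contain very long single lines.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); without this call,
		// a longer line stops Scan() with bufio.ErrTooLong ("bufio.Scanner: token
		// too long"), matching the failure reported above.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow lines up to 10 MiB
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}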
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-294956 -n pause-294956
helpers_test.go:261: (dbg) Run:  kubectl --context pause-294956 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (87.83s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (269.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.6.2.853639363.exe start -p stopped-upgrade-690155 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.6.2.853639363.exe start -p stopped-upgrade-690155 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m17.038431982s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.6.2.853639363.exe -p stopped-upgrade-690155 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.6.2.853639363.exe -p stopped-upgrade-690155 stop: (1m33.10586323s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-690155 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-690155 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (39.66163972s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-690155] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-690155 in cluster stopped-upgrade-690155
	* Restarting existing kvm2 VM for "stopped-upgrade-690155" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:44:03.841888   45364 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:44:03.842030   45364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:44:03.842040   45364 out.go:309] Setting ErrFile to fd 2...
	I0907 00:44:03.842045   45364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:44:03.842248   45364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:44:03.842766   45364 out.go:303] Setting JSON to false
	I0907 00:44:03.843775   45364 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5188,"bootTime":1694042256,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:44:03.843825   45364 start.go:138] virtualization: kvm guest
	I0907 00:44:03.846167   45364 out.go:177] * [stopped-upgrade-690155] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:44:03.847932   45364 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:44:03.849169   45364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:44:03.847989   45364 notify.go:220] Checking for updates...
	I0907 00:44:03.851830   45364 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:44:03.853254   45364 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:44:03.854627   45364 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:44:03.856000   45364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:44:03.858319   45364 config.go:182] Loaded profile config "stopped-upgrade-690155": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0907 00:44:03.858354   45364 start_flags.go:686] config upgrade: Driver=kvm2
	I0907 00:44:03.858367   45364 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0907 00:44:03.858539   45364 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/stopped-upgrade-690155/config.json ...
	I0907 00:44:03.859850   45364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:44:03.859885   45364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:44:03.873924   45364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I0907 00:44:03.874398   45364 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:44:03.875073   45364 main.go:141] libmachine: Using API Version  1
	I0907 00:44:03.875108   45364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:44:03.875427   45364 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:44:03.875643   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .DriverName
	I0907 00:44:03.877706   45364 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0907 00:44:03.879062   45364 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:44:03.879353   45364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:44:03.879389   45364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:44:03.896389   45364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0907 00:44:03.896758   45364 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:44:03.897193   45364 main.go:141] libmachine: Using API Version  1
	I0907 00:44:03.897208   45364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:44:03.897473   45364 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:44:03.897691   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .DriverName
	I0907 00:44:03.933842   45364 out.go:177] * Using the kvm2 driver based on existing profile
	I0907 00:44:03.935154   45364 start.go:298] selected driver: kvm2
	I0907 00:44:03.935177   45364 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-690155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.39.5 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0907 00:44:03.935278   45364 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:44:03.936217   45364 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:44:03.936302   45364 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:44:03.951432   45364 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 00:44:03.951780   45364 cni.go:84] Creating CNI manager for ""
	I0907 00:44:03.951797   45364 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0907 00:44:03.951805   45364 start_flags.go:321] config:
	{Name:stopped-upgrade-690155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.39.5 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0907 00:44:03.951950   45364 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:44:03.954454   45364 out.go:177] * Starting control plane node stopped-upgrade-690155 in cluster stopped-upgrade-690155
	I0907 00:44:03.955848   45364 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0907 00:44:04.071844   45364 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0907 00:44:04.072022   45364 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/stopped-upgrade-690155/config.json ...
	I0907 00:44:04.072144   45364 cache.go:107] acquiring lock: {Name:mk777a2c6c8af6f8c4f579806b6f1802d6d0d780 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:44:04.072200   45364 cache.go:107] acquiring lock: {Name:mk8a0d25472c2300613db21a0ebf2c980b39f32a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:44:04.072213   45364 cache.go:107] acquiring lock: {Name:mk09936f8ca333a4f4eed016557aac6597ad6ba7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:44:04.072166   45364 cache.go:107] acquiring lock: {Name:mk26f05d7c4624705d894605a55d55faf900f80e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:44:04.072246   45364 cache.go:107] acquiring lock: {Name:mk3dee7e5f6eceeab2f3e6acc96e2842cd7cabe8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:44:04.072273   45364 cache.go:115] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0907 00:44:04.072261   45364 cache.go:107] acquiring lock: {Name:mkb1c77274cfa9b3493e4a1fd02e6a2650efe360 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:44:04.072290   45364 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 159.827µs
	I0907 00:44:04.072320   45364 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0907 00:44:04.072320   45364 start.go:365] acquiring machines lock for stopped-upgrade-690155: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:44:04.072322   45364 cache.go:107] acquiring lock: {Name:mkd60f16278a3e2c71e588d7ee3a4c6470160b75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:44:04.072364   45364 cache.go:107] acquiring lock: {Name:mk771fb3339fe97a4385bee215495cef98959127 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:44:04.072411   45364 start.go:369] acquired machines lock for "stopped-upgrade-690155" in 70.652µs
	I0907 00:44:04.072439   45364 cache.go:115] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0907 00:44:04.072447   45364 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:44:04.072449   45364 cache.go:115] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0907 00:44:04.072460   45364 fix.go:54] fixHost starting: minikube
	I0907 00:44:04.072450   45364 cache.go:115] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0907 00:44:04.072467   45364 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 303.951µs
	I0907 00:44:04.072478   45364 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0907 00:44:04.072480   45364 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 248.07µs
	I0907 00:44:04.072498   45364 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0907 00:44:04.072262   45364 cache.go:115] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0907 00:44:04.072507   45364 cache.go:115] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0907 00:44:04.072516   45364 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 327.672µs
	I0907 00:44:04.072530   45364 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0907 00:44:04.072449   45364 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 204.138µs
	I0907 00:44:04.072537   45364 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0907 00:44:04.072497   45364 cache.go:115] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0907 00:44:04.072548   45364 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 266.812µs
	I0907 00:44:04.072560   45364 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0907 00:44:04.072521   45364 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 230.045µs
	I0907 00:44:04.072568   45364 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0907 00:44:04.072453   45364 cache.go:115] /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0907 00:44:04.072578   45364 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 385.119µs
	I0907 00:44:04.072586   45364 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0907 00:44:04.072593   45364 cache.go:87] Successfully saved all images to host disk.
	I0907 00:44:04.072955   45364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:44:04.072989   45364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:44:04.087358   45364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46511
	I0907 00:44:04.087764   45364 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:44:04.088230   45364 main.go:141] libmachine: Using API Version  1
	I0907 00:44:04.088249   45364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:44:04.088532   45364 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:44:04.088690   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .DriverName
	I0907 00:44:04.088813   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetState
	I0907 00:44:04.090532   45364 fix.go:102] recreateIfNeeded on stopped-upgrade-690155: state=Stopped err=<nil>
	I0907 00:44:04.090554   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .DriverName
	W0907 00:44:04.090757   45364 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:44:04.093145   45364 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-690155" ...
	I0907 00:44:04.094646   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .Start
	I0907 00:44:04.094859   45364 main.go:141] libmachine: (stopped-upgrade-690155) Ensuring networks are active...
	I0907 00:44:04.095671   45364 main.go:141] libmachine: (stopped-upgrade-690155) Ensuring network default is active
	I0907 00:44:04.096052   45364 main.go:141] libmachine: (stopped-upgrade-690155) Ensuring network minikube-net is active
	I0907 00:44:04.096522   45364 main.go:141] libmachine: (stopped-upgrade-690155) Getting domain xml...
	I0907 00:44:04.097380   45364 main.go:141] libmachine: (stopped-upgrade-690155) Creating domain...
	I0907 00:44:05.336027   45364 main.go:141] libmachine: (stopped-upgrade-690155) Waiting to get IP...
	I0907 00:44:05.336791   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:05.337174   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:05.337266   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:05.337181   45399 retry.go:31] will retry after 219.122735ms: waiting for machine to come up
	I0907 00:44:05.557535   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:05.558036   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:05.558068   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:05.558017   45399 retry.go:31] will retry after 319.640505ms: waiting for machine to come up
	I0907 00:44:05.879434   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:05.879807   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:05.879835   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:05.879758   45399 retry.go:31] will retry after 299.498799ms: waiting for machine to come up
	I0907 00:44:06.181216   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:06.181755   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:06.181780   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:06.181699   45399 retry.go:31] will retry after 445.952687ms: waiting for machine to come up
	I0907 00:44:06.629531   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:06.629991   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:06.630021   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:06.629943   45399 retry.go:31] will retry after 758.495073ms: waiting for machine to come up
	I0907 00:44:07.389864   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:07.390475   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:07.390504   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:07.390415   45399 retry.go:31] will retry after 599.681415ms: waiting for machine to come up
	I0907 00:44:07.992265   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:07.992843   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:07.992868   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:07.992789   45399 retry.go:31] will retry after 939.879869ms: waiting for machine to come up
	I0907 00:44:08.934060   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:08.934461   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:08.934492   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:08.934408   45399 retry.go:31] will retry after 1.414135182s: waiting for machine to come up
	I0907 00:44:10.350520   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:10.350953   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:10.350980   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:10.350853   45399 retry.go:31] will retry after 1.572889128s: waiting for machine to come up
	I0907 00:44:11.925635   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:11.926139   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:11.926170   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:11.926075   45399 retry.go:31] will retry after 2.1389663s: waiting for machine to come up
	I0907 00:44:14.067277   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:14.067821   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:14.067854   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:14.067747   45399 retry.go:31] will retry after 2.665028024s: waiting for machine to come up
	I0907 00:44:16.736250   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:16.736698   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:16.736729   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:16.736622   45399 retry.go:31] will retry after 2.285484352s: waiting for machine to come up
	I0907 00:44:19.025111   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:19.025554   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:19.025585   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:19.025491   45399 retry.go:31] will retry after 3.568698117s: waiting for machine to come up
	I0907 00:44:22.595879   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:22.596358   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:22.596386   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:22.596320   45399 retry.go:31] will retry after 4.448250502s: waiting for machine to come up
	I0907 00:44:27.048000   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:27.048645   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | unable to find current IP address of domain stopped-upgrade-690155 in network minikube-net
	I0907 00:44:27.048677   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | I0907 00:44:27.048577   45399 retry.go:31] will retry after 7.056348165s: waiting for machine to come up
	I0907 00:44:34.106623   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.107065   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has current primary IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.107083   45364 main.go:141] libmachine: (stopped-upgrade-690155) Found IP for machine: 192.168.39.5
	I0907 00:44:34.107103   45364 main.go:141] libmachine: (stopped-upgrade-690155) Reserving static IP address...
	I0907 00:44:34.107505   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "stopped-upgrade-690155", mac: "52:54:00:62:05:d4", ip: "192.168.39.5"} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:34.107527   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-690155", mac: "52:54:00:62:05:d4", ip: "192.168.39.5"}
	I0907 00:44:34.107544   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | Getting to WaitForSSH function...
	I0907 00:44:34.107563   45364 main.go:141] libmachine: (stopped-upgrade-690155) Reserved static IP address: 192.168.39.5
	I0907 00:44:34.107578   45364 main.go:141] libmachine: (stopped-upgrade-690155) Waiting for SSH to be available...
	I0907 00:44:34.109659   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.109929   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:34.109968   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.110078   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | Using SSH client type: external
	I0907 00:44:34.110110   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/stopped-upgrade-690155/id_rsa (-rw-------)
	I0907 00:44:34.110166   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/stopped-upgrade-690155/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:44:34.110196   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | About to run SSH command:
	I0907 00:44:34.110214   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | exit 0
	I0907 00:44:34.238191   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | SSH cmd err, output: <nil>: 
	I0907 00:44:34.238570   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetConfigRaw
	I0907 00:44:34.239197   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetIP
	I0907 00:44:34.241634   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.242035   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:34.242072   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.242276   45364 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/stopped-upgrade-690155/config.json ...
	I0907 00:44:34.242488   45364 machine.go:88] provisioning docker machine ...
	I0907 00:44:34.242512   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .DriverName
	I0907 00:44:34.242747   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetMachineName
	I0907 00:44:34.242941   45364 buildroot.go:166] provisioning hostname "stopped-upgrade-690155"
	I0907 00:44:34.242960   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetMachineName
	I0907 00:44:34.243098   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHHostname
	I0907 00:44:34.245305   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.245645   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:34.245671   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.245799   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHPort
	I0907 00:44:34.245975   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHKeyPath
	I0907 00:44:34.246110   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHKeyPath
	I0907 00:44:34.246236   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHUsername
	I0907 00:44:34.246376   45364 main.go:141] libmachine: Using SSH client type: native
	I0907 00:44:34.246898   45364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0907 00:44:34.246918   45364 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-690155 && echo "stopped-upgrade-690155" | sudo tee /etc/hostname
	I0907 00:44:34.369063   45364 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-690155
	
	I0907 00:44:34.369098   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHHostname
	I0907 00:44:34.371732   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.372172   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:34.372224   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.372314   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHPort
	I0907 00:44:34.372546   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHKeyPath
	I0907 00:44:34.372706   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHKeyPath
	I0907 00:44:34.372858   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHUsername
	I0907 00:44:34.373016   45364 main.go:141] libmachine: Using SSH client type: native
	I0907 00:44:34.373450   45364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0907 00:44:34.373469   45364 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-690155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-690155/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-690155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:44:34.495249   45364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:44:34.495279   45364 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:44:34.495301   45364 buildroot.go:174] setting up certificates
	I0907 00:44:34.495313   45364 provision.go:83] configureAuth start
	I0907 00:44:34.495328   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetMachineName
	I0907 00:44:34.495601   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetIP
	I0907 00:44:34.498067   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.498373   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:34.498398   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.498549   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHHostname
	I0907 00:44:34.500934   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.501288   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:34.501318   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.501471   45364 provision.go:138] copyHostCerts
	I0907 00:44:34.501511   45364 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:44:34.501519   45364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:44:34.501582   45364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:44:34.501711   45364 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:44:34.501723   45364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:44:34.501750   45364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:44:34.501857   45364 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:44:34.501866   45364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:44:34.501886   45364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:44:34.501938   45364 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-690155 san=[192.168.39.5 192.168.39.5 localhost 127.0.0.1 minikube stopped-upgrade-690155]
	I0907 00:44:34.911459   45364 provision.go:172] copyRemoteCerts
	I0907 00:44:34.911524   45364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:44:34.911546   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHHostname
	I0907 00:44:34.914291   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.914627   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:34.914657   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:34.914868   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHPort
	I0907 00:44:34.915079   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHKeyPath
	I0907 00:44:34.915231   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHUsername
	I0907 00:44:34.915371   45364 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/stopped-upgrade-690155/id_rsa Username:docker}
	I0907 00:44:35.000840   45364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:44:35.014667   45364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0907 00:44:35.027540   45364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:44:35.040384   45364 provision.go:86] duration metric: configureAuth took 545.058334ms
	I0907 00:44:35.040405   45364 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:44:35.040598   45364 config.go:182] Loaded profile config "stopped-upgrade-690155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0907 00:44:35.040687   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHHostname
	I0907 00:44:35.043081   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:35.043447   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:35.043489   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:35.043648   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHPort
	I0907 00:44:35.043872   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHKeyPath
	I0907 00:44:35.044034   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHKeyPath
	I0907 00:44:35.044205   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHUsername
	I0907 00:44:35.044370   45364 main.go:141] libmachine: Using SSH client type: native
	I0907 00:44:35.044781   45364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0907 00:44:35.044799   45364 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:44:42.570097   45364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:44:42.570125   45364 machine.go:91] provisioned docker machine in 8.327620406s
	I0907 00:44:42.570135   45364 start.go:300] post-start starting for "stopped-upgrade-690155" (driver="kvm2")
	I0907 00:44:42.570145   45364 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:44:42.570167   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .DriverName
	I0907 00:44:42.570509   45364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:44:42.570536   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHHostname
	I0907 00:44:42.573723   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:42.574088   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:42.574122   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:42.574270   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHPort
	I0907 00:44:42.574459   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHKeyPath
	I0907 00:44:42.574622   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHUsername
	I0907 00:44:42.574752   45364 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/stopped-upgrade-690155/id_rsa Username:docker}
	I0907 00:44:42.657695   45364 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:44:42.661780   45364 info.go:137] Remote host: Buildroot 2019.02.7
	I0907 00:44:42.661801   45364 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:44:42.661852   45364 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:44:42.661918   45364 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:44:42.661999   45364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:44:42.667696   45364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:44:42.680565   45364 start.go:303] post-start completed in 110.417487ms
	I0907 00:44:42.680588   45364 fix.go:56] fixHost completed within 38.608131047s
	I0907 00:44:42.680609   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHHostname
	I0907 00:44:42.683260   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:42.683555   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:42.683595   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:42.683725   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHPort
	I0907 00:44:42.683929   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHKeyPath
	I0907 00:44:42.684097   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHKeyPath
	I0907 00:44:42.684325   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHUsername
	I0907 00:44:42.684499   45364 main.go:141] libmachine: Using SSH client type: native
	I0907 00:44:42.684879   45364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0907 00:44:42.684891   45364 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0907 00:44:42.803835   45364 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047482.745932869
	
	I0907 00:44:42.803863   45364 fix.go:206] guest clock: 1694047482.745932869
	I0907 00:44:42.803873   45364 fix.go:219] Guest: 2023-09-07 00:44:42.745932869 +0000 UTC Remote: 2023-09-07 00:44:42.680591415 +0000 UTC m=+38.869477110 (delta=65.341454ms)
	I0907 00:44:42.803894   45364 fix.go:190] guest clock delta is within tolerance: 65.341454ms
	I0907 00:44:42.803899   45364 start.go:83] releasing machines lock for "stopped-upgrade-690155", held for 38.731471057s
	I0907 00:44:42.803919   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .DriverName
	I0907 00:44:42.804189   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetIP
	I0907 00:44:42.806542   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:42.806842   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:42.806889   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:42.807000   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .DriverName
	I0907 00:44:42.807632   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .DriverName
	I0907 00:44:42.807824   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .DriverName
	I0907 00:44:42.807912   45364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:44:42.807967   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHHostname
	I0907 00:44:42.808089   45364 ssh_runner.go:195] Run: cat /version.json
	I0907 00:44:42.808114   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHHostname
	I0907 00:44:42.810959   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:42.811149   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:42.811348   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:42.811395   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:42.811522   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHPort
	I0907 00:44:42.811670   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:05:d4", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-09-07 01:44:28 +0000 UTC Type:0 Mac:52:54:00:62:05:d4 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:stopped-upgrade-690155 Clientid:01:52:54:00:62:05:d4}
	I0907 00:44:42.811702   45364 main.go:141] libmachine: (stopped-upgrade-690155) DBG | domain stopped-upgrade-690155 has defined IP address 192.168.39.5 and MAC address 52:54:00:62:05:d4 in network minikube-net
	I0907 00:44:42.811743   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHKeyPath
	I0907 00:44:42.811849   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHPort
	I0907 00:44:42.811911   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHUsername
	I0907 00:44:42.811966   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHKeyPath
	I0907 00:44:42.812028   45364 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/stopped-upgrade-690155/id_rsa Username:docker}
	I0907 00:44:42.812087   45364 main.go:141] libmachine: (stopped-upgrade-690155) Calling .GetSSHUsername
	I0907 00:44:42.812215   45364 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/stopped-upgrade-690155/id_rsa Username:docker}
	W0907 00:44:42.914469   45364 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0907 00:44:42.914532   45364 ssh_runner.go:195] Run: systemctl --version
	I0907 00:44:42.919189   45364 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:44:43.100496   45364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:44:43.106447   45364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:44:43.106520   45364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:44:43.111629   45364 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0907 00:44:43.111652   45364 start.go:466] detecting cgroup driver to use...
	I0907 00:44:43.111704   45364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:44:43.121191   45364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:44:43.129024   45364 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:44:43.129076   45364 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:44:43.136214   45364 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:44:43.143478   45364 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0907 00:44:43.150687   45364 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0907 00:44:43.150728   45364 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:44:43.240641   45364 docker.go:212] disabling docker service ...
	I0907 00:44:43.240694   45364 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:44:43.252711   45364 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:44:43.260581   45364 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:44:43.346156   45364 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:44:43.433619   45364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:44:43.442190   45364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:44:43.453295   45364 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0907 00:44:43.453365   45364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:44:43.461692   45364 out.go:177] 
	W0907 00:44:43.463157   45364 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0907 00:44:43.463178   45364 out.go:239] * 
	W0907 00:44:43.464012   45364 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0907 00:44:43.465704   45364 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-690155 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (269.81s)
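
The exit status 90 above is produced by the pause_image update step: sed is pointed at /etc/crio/crio.conf.d/02-crio.conf, but that drop-in does not exist on the Buildroot 2019.02.7 guest restored from the v1.6.2 ISO, which presumably still carries only a monolithic /etc/crio/crio.conf. Below is a minimal Go sketch of a path-probing variant of that step; the fallback path and function names are assumptions for illustration, not minikube's actual implementation.

	// Hypothetical sketch (not minikube's real code): probe both known crio config
	// locations before patching pause_image, so an older guest layout does not fail.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func patchPauseImage(image string) error {
		candidates := []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // layout expected by current minikube
			"/etc/crio/crio.conf",                // assumed fallback on the older ISO
		}
		for _, conf := range candidates {
			if _, err := os.Stat(conf); err != nil {
				continue // path missing on this guest, try the next candidate
			}
			expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
			return exec.Command("sudo", "sed", "-i", expr, conf).Run()
		}
		return fmt.Errorf("no crio config found in %v", candidates)
	}

	func main() {
		if err := patchPauseImage("registry.k8s.io/pause:3.1"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
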

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (140.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-940806 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-940806 --alsologtostderr -v=3: exit status 82 (2m1.773611369s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-940806"  ...
	* Stopping node "old-k8s-version-940806"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:43:20.543604   45040 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:43:20.543789   45040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:43:20.543822   45040 out.go:309] Setting ErrFile to fd 2...
	I0907 00:43:20.543837   45040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:43:20.544048   45040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:43:20.544300   45040 out.go:303] Setting JSON to false
	I0907 00:43:20.544475   45040 mustload.go:65] Loading cluster: old-k8s-version-940806
	I0907 00:43:20.544910   45040 config.go:182] Loaded profile config "old-k8s-version-940806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:43:20.545021   45040 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/config.json ...
	I0907 00:43:20.545216   45040 mustload.go:65] Loading cluster: old-k8s-version-940806
	I0907 00:43:20.545348   45040 config.go:182] Loaded profile config "old-k8s-version-940806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:43:20.545383   45040 stop.go:39] StopHost: old-k8s-version-940806
	I0907 00:43:20.545768   45040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:43:20.545831   45040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:43:20.560752   45040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35023
	I0907 00:43:20.561342   45040 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:43:20.562078   45040 main.go:141] libmachine: Using API Version  1
	I0907 00:43:20.562100   45040 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:43:20.562482   45040 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:43:20.566095   45040 out.go:177] * Stopping node "old-k8s-version-940806"  ...
	I0907 00:43:20.569033   45040 main.go:141] libmachine: Stopping "old-k8s-version-940806"...
	I0907 00:43:20.569048   45040 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:43:20.570993   45040 main.go:141] libmachine: (old-k8s-version-940806) Calling .Stop
	I0907 00:43:20.579198   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 0/60
	I0907 00:43:21.581764   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 1/60
	I0907 00:43:22.583213   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 2/60
	I0907 00:43:23.584929   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 3/60
	I0907 00:43:24.587465   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 4/60
	I0907 00:43:25.589364   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 5/60
	I0907 00:43:26.590989   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 6/60
	I0907 00:43:27.593484   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 7/60
	I0907 00:43:28.595208   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 8/60
	I0907 00:43:29.597458   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 9/60
	I0907 00:43:30.599927   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 10/60
	I0907 00:43:31.602159   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 11/60
	I0907 00:43:32.603612   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 12/60
	I0907 00:43:33.605364   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 13/60
	I0907 00:43:34.607005   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 14/60
	I0907 00:43:35.609271   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 15/60
	I0907 00:43:36.610931   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 16/60
	I0907 00:43:37.613649   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 17/60
	I0907 00:43:38.615086   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 18/60
	I0907 00:43:39.617252   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 19/60
	I0907 00:43:40.619786   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 20/60
	I0907 00:43:41.621312   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 21/60
	I0907 00:43:42.623812   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 22/60
	I0907 00:43:43.625891   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 23/60
	I0907 00:43:44.627431   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 24/60
	I0907 00:43:45.629634   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 25/60
	I0907 00:43:46.631033   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 26/60
	I0907 00:43:47.633430   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 27/60
	I0907 00:43:48.635444   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 28/60
	I0907 00:43:49.637363   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 29/60
	I0907 00:43:50.639566   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 30/60
	I0907 00:43:51.641688   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 31/60
	I0907 00:43:52.645199   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 32/60
	I0907 00:43:53.646862   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 33/60
	I0907 00:43:54.648213   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 34/60
	I0907 00:43:55.649758   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 35/60
	I0907 00:43:56.651535   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 36/60
	I0907 00:43:57.653071   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 37/60
	I0907 00:43:58.654372   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 38/60
	I0907 00:43:59.655831   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 39/60
	I0907 00:44:00.657896   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 40/60
	I0907 00:44:01.659357   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 41/60
	I0907 00:44:02.661124   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 42/60
	I0907 00:44:03.662651   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 43/60
	I0907 00:44:04.664301   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 44/60
	I0907 00:44:05.666318   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 45/60
	I0907 00:44:06.668075   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 46/60
	I0907 00:44:07.669465   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 47/60
	I0907 00:44:08.671268   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 48/60
	I0907 00:44:09.672934   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 49/60
	I0907 00:44:10.675136   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 50/60
	I0907 00:44:11.676611   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 51/60
	I0907 00:44:12.678106   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 52/60
	I0907 00:44:13.679683   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 53/60
	I0907 00:44:14.681747   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 54/60
	I0907 00:44:15.683977   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 55/60
	I0907 00:44:16.685390   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 56/60
	I0907 00:44:17.686876   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 57/60
	I0907 00:44:18.688208   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 58/60
	I0907 00:44:19.690106   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 59/60
	I0907 00:44:20.691149   45040 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0907 00:44:20.691203   45040 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0907 00:44:20.691217   45040 retry.go:31] will retry after 1.450010975s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0907 00:44:22.141913   45040 stop.go:39] StopHost: old-k8s-version-940806
	I0907 00:44:22.142255   45040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:44:22.142304   45040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:44:22.156668   45040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41197
	I0907 00:44:22.157103   45040 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:44:22.157588   45040 main.go:141] libmachine: Using API Version  1
	I0907 00:44:22.157618   45040 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:44:22.157952   45040 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:44:22.160047   45040 out.go:177] * Stopping node "old-k8s-version-940806"  ...
	I0907 00:44:22.161266   45040 main.go:141] libmachine: Stopping "old-k8s-version-940806"...
	I0907 00:44:22.161284   45040 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:44:22.162827   45040 main.go:141] libmachine: (old-k8s-version-940806) Calling .Stop
	I0907 00:44:22.166089   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 0/60
	I0907 00:44:23.168565   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 1/60
	I0907 00:44:24.170390   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 2/60
	I0907 00:44:25.172053   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 3/60
	I0907 00:44:26.173606   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 4/60
	I0907 00:44:27.175602   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 5/60
	I0907 00:44:28.177513   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 6/60
	I0907 00:44:29.178710   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 7/60
	I0907 00:44:30.180095   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 8/60
	I0907 00:44:31.181413   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 9/60
	I0907 00:44:32.183268   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 10/60
	I0907 00:44:33.184621   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 11/60
	I0907 00:44:34.185917   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 12/60
	I0907 00:44:35.187397   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 13/60
	I0907 00:44:36.188775   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 14/60
	I0907 00:44:37.190679   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 15/60
	I0907 00:44:38.191933   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 16/60
	I0907 00:44:39.193303   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 17/60
	I0907 00:44:40.194612   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 18/60
	I0907 00:44:41.196068   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 19/60
	I0907 00:44:42.197875   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 20/60
	I0907 00:44:43.199245   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 21/60
	I0907 00:44:44.201579   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 22/60
	I0907 00:44:45.203091   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 23/60
	I0907 00:44:46.204681   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 24/60
	I0907 00:44:47.206546   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 25/60
	I0907 00:44:48.208007   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 26/60
	I0907 00:44:49.209329   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 27/60
	I0907 00:44:50.210792   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 28/60
	I0907 00:44:51.212264   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 29/60
	I0907 00:44:52.214534   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 30/60
	I0907 00:44:53.215826   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 31/60
	I0907 00:44:54.217295   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 32/60
	I0907 00:44:55.218943   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 33/60
	I0907 00:44:56.220469   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 34/60
	I0907 00:44:57.222137   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 35/60
	I0907 00:44:58.223819   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 36/60
	I0907 00:44:59.225302   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 37/60
	I0907 00:45:00.226508   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 38/60
	I0907 00:45:01.228172   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 39/60
	I0907 00:45:02.229896   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 40/60
	I0907 00:45:03.231119   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 41/60
	I0907 00:45:04.232515   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 42/60
	I0907 00:45:05.233940   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 43/60
	I0907 00:45:06.235442   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 44/60
	I0907 00:45:07.237328   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 45/60
	I0907 00:45:08.238684   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 46/60
	I0907 00:45:09.240503   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 47/60
	I0907 00:45:10.241927   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 48/60
	I0907 00:45:11.243668   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 49/60
	I0907 00:45:12.245821   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 50/60
	I0907 00:45:13.247351   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 51/60
	I0907 00:45:14.249338   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 52/60
	I0907 00:45:15.250663   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 53/60
	I0907 00:45:16.252240   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 54/60
	I0907 00:45:17.253986   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 55/60
	I0907 00:45:18.255343   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 56/60
	I0907 00:45:19.256741   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 57/60
	I0907 00:45:20.258279   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 58/60
	I0907 00:45:21.259726   45040 main.go:141] libmachine: (old-k8s-version-940806) Waiting for machine to stop 59/60
	I0907 00:45:22.260615   45040 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0907 00:45:22.260650   45040 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0907 00:45:22.262608   45040 out.go:177] 
	W0907 00:45:22.264654   45040 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0907 00:45:22.264673   45040 out.go:239] * 
	* 
	W0907 00:45:22.266948   45040 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0907 00:45:22.268598   45040 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-940806 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-940806 -n old-k8s-version-940806
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-940806 -n old-k8s-version-940806: exit status 3 (18.536696202s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:45:40.807154   46120 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.245:22: connect: no route to host
	E0907 00:45:40.807174   46120 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.245:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-940806" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (140.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-321164 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-321164 --alsologtostderr -v=3: exit status 82 (2m2.176371154s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-321164"  ...
	* Stopping node "no-preload-321164"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:44:02.237308   45331 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:44:02.237443   45331 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:44:02.237454   45331 out.go:309] Setting ErrFile to fd 2...
	I0907 00:44:02.237461   45331 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:44:02.237683   45331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:44:02.237938   45331 out.go:303] Setting JSON to false
	I0907 00:44:02.238022   45331 mustload.go:65] Loading cluster: no-preload-321164
	I0907 00:44:02.238383   45331 config.go:182] Loaded profile config "no-preload-321164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:44:02.238482   45331 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/config.json ...
	I0907 00:44:02.238657   45331 mustload.go:65] Loading cluster: no-preload-321164
	I0907 00:44:02.238850   45331 config.go:182] Loaded profile config "no-preload-321164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:44:02.238887   45331 stop.go:39] StopHost: no-preload-321164
	I0907 00:44:02.239323   45331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:44:02.239377   45331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:44:02.255465   45331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45957
	I0907 00:44:02.255961   45331 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:44:02.256610   45331 main.go:141] libmachine: Using API Version  1
	I0907 00:44:02.256641   45331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:44:02.256980   45331 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:44:02.259160   45331 out.go:177] * Stopping node "no-preload-321164"  ...
	I0907 00:44:02.260612   45331 main.go:141] libmachine: Stopping "no-preload-321164"...
	I0907 00:44:02.260637   45331 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:44:02.262642   45331 main.go:141] libmachine: (no-preload-321164) Calling .Stop
	I0907 00:44:02.266678   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 0/60
	I0907 00:44:03.268470   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 1/60
	I0907 00:44:04.269948   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 2/60
	I0907 00:44:05.271429   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 3/60
	I0907 00:44:06.273375   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 4/60
	I0907 00:44:07.275512   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 5/60
	I0907 00:44:08.277222   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 6/60
	I0907 00:44:09.278954   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 7/60
	I0907 00:44:10.281326   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 8/60
	I0907 00:44:11.282620   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 9/60
	I0907 00:44:12.283909   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 10/60
	I0907 00:44:13.285823   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 11/60
	I0907 00:44:14.287257   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 12/60
	I0907 00:44:15.288470   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 13/60
	I0907 00:44:16.289907   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 14/60
	I0907 00:44:17.292183   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 15/60
	I0907 00:44:18.293709   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 16/60
	I0907 00:44:19.295184   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 17/60
	I0907 00:44:20.297121   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 18/60
	I0907 00:44:21.298620   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 19/60
	I0907 00:44:22.299928   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 20/60
	I0907 00:44:23.301617   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 21/60
	I0907 00:44:24.302993   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 22/60
	I0907 00:44:25.304402   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 23/60
	I0907 00:44:26.305904   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 24/60
	I0907 00:44:27.307798   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 25/60
	I0907 00:44:28.309312   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 26/60
	I0907 00:44:29.310684   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 27/60
	I0907 00:44:30.312075   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 28/60
	I0907 00:44:31.313364   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 29/60
	I0907 00:44:32.315649   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 30/60
	I0907 00:44:33.317090   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 31/60
	I0907 00:44:34.318400   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 32/60
	I0907 00:44:35.319914   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 33/60
	I0907 00:44:36.321527   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 34/60
	I0907 00:44:37.323649   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 35/60
	I0907 00:44:38.325063   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 36/60
	I0907 00:44:39.326411   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 37/60
	I0907 00:44:40.327761   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 38/60
	I0907 00:44:41.329171   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 39/60
	I0907 00:44:42.331458   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 40/60
	I0907 00:44:43.332776   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 41/60
	I0907 00:44:44.848117   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 42/60
	I0907 00:44:45.849645   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 43/60
	I0907 00:44:46.851294   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 44/60
	I0907 00:44:47.853233   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 45/60
	I0907 00:44:48.854579   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 46/60
	I0907 00:44:49.856367   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 47/60
	I0907 00:44:50.857936   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 48/60
	I0907 00:44:51.859421   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 49/60
	I0907 00:44:52.861668   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 50/60
	I0907 00:44:53.863370   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 51/60
	I0907 00:44:54.864856   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 52/60
	I0907 00:44:55.866242   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 53/60
	I0907 00:44:56.867823   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 54/60
	I0907 00:44:57.869942   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 55/60
	I0907 00:44:58.871384   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 56/60
	I0907 00:44:59.872846   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 57/60
	I0907 00:45:00.874364   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 58/60
	I0907 00:45:01.875963   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 59/60
	I0907 00:45:02.877352   45331 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0907 00:45:02.877421   45331 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0907 00:45:02.877436   45331 retry.go:31] will retry after 1.364371347s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0907 00:45:04.242928   45331 stop.go:39] StopHost: no-preload-321164
	I0907 00:45:04.243250   45331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:45:04.243305   45331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:45:04.257327   45331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0907 00:45:04.257772   45331 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:45:04.258332   45331 main.go:141] libmachine: Using API Version  1
	I0907 00:45:04.258357   45331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:45:04.258671   45331 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:45:04.260690   45331 out.go:177] * Stopping node "no-preload-321164"  ...
	I0907 00:45:04.262304   45331 main.go:141] libmachine: Stopping "no-preload-321164"...
	I0907 00:45:04.262326   45331 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:45:04.264159   45331 main.go:141] libmachine: (no-preload-321164) Calling .Stop
	I0907 00:45:04.267452   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 0/60
	I0907 00:45:05.269102   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 1/60
	I0907 00:45:06.270254   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 2/60
	I0907 00:45:07.271829   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 3/60
	I0907 00:45:08.273072   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 4/60
	I0907 00:45:09.274733   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 5/60
	I0907 00:45:10.276276   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 6/60
	I0907 00:45:11.277854   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 7/60
	I0907 00:45:12.279281   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 8/60
	I0907 00:45:13.281636   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 9/60
	I0907 00:45:14.284003   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 10/60
	I0907 00:45:15.285144   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 11/60
	I0907 00:45:16.287226   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 12/60
	I0907 00:45:17.289327   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 13/60
	I0907 00:45:18.290583   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 14/60
	I0907 00:45:19.292235   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 15/60
	I0907 00:45:20.294145   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 16/60
	I0907 00:45:21.295409   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 17/60
	I0907 00:45:22.297065   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 18/60
	I0907 00:45:23.298405   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 19/60
	I0907 00:45:24.300194   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 20/60
	I0907 00:45:25.301778   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 21/60
	I0907 00:45:26.303159   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 22/60
	I0907 00:45:27.304579   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 23/60
	I0907 00:45:28.305920   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 24/60
	I0907 00:45:29.308114   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 25/60
	I0907 00:45:30.309788   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 26/60
	I0907 00:45:31.311418   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 27/60
	I0907 00:45:32.312758   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 28/60
	I0907 00:45:33.314263   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 29/60
	I0907 00:45:34.316148   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 30/60
	I0907 00:45:35.317692   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 31/60
	I0907 00:45:36.319045   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 32/60
	I0907 00:45:37.321367   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 33/60
	I0907 00:45:38.322705   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 34/60
	I0907 00:45:39.324972   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 35/60
	I0907 00:45:40.326495   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 36/60
	I0907 00:45:41.328359   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 37/60
	I0907 00:45:42.329753   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 38/60
	I0907 00:45:43.331260   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 39/60
	I0907 00:45:44.333059   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 40/60
	I0907 00:45:45.334291   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 41/60
	I0907 00:45:46.335559   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 42/60
	I0907 00:45:47.336729   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 43/60
	I0907 00:45:48.337999   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 44/60
	I0907 00:45:49.339805   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 45/60
	I0907 00:45:50.341062   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 46/60
	I0907 00:45:51.342371   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 47/60
	I0907 00:45:52.343659   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 48/60
	I0907 00:45:53.344961   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 49/60
	I0907 00:45:54.346519   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 50/60
	I0907 00:45:55.347757   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 51/60
	I0907 00:45:56.349237   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 52/60
	I0907 00:45:57.350849   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 53/60
	I0907 00:45:58.352817   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 54/60
	I0907 00:45:59.354466   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 55/60
	I0907 00:46:00.355798   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 56/60
	I0907 00:46:01.357242   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 57/60
	I0907 00:46:02.358589   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 58/60
	I0907 00:46:03.359825   45331 main.go:141] libmachine: (no-preload-321164) Waiting for machine to stop 59/60
	I0907 00:46:04.360814   45331 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0907 00:46:04.360852   45331 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0907 00:46:04.362871   45331 out.go:177] 
	W0907 00:46:04.364267   45331 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0907 00:46:04.364282   45331 out.go:239] * 
	* 
	W0907 00:46:04.366631   45331 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0907 00:46:04.367939   45331 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-321164 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-321164 -n no-preload-321164
E0907 00:46:07.894671   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-321164 -n no-preload-321164: exit status 3 (18.420226857s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:46:22.791119   46483 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.125:22: connect: no route to host
	E0907 00:46:22.791150   46483 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-321164" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.60s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-546209 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-546209 --alsologtostderr -v=3: exit status 82 (2m0.889719622s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-546209"  ...
	* Stopping node "embed-certs-546209"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:44:07.940604   45515 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:44:07.940741   45515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:44:07.940751   45515 out.go:309] Setting ErrFile to fd 2...
	I0907 00:44:07.940755   45515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:44:07.940962   45515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:44:07.941221   45515 out.go:303] Setting JSON to false
	I0907 00:44:07.941313   45515 mustload.go:65] Loading cluster: embed-certs-546209
	I0907 00:44:07.941750   45515 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:44:07.941868   45515 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/config.json ...
	I0907 00:44:07.942075   45515 mustload.go:65] Loading cluster: embed-certs-546209
	I0907 00:44:07.942219   45515 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:44:07.942256   45515 stop.go:39] StopHost: embed-certs-546209
	I0907 00:44:07.942732   45515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:44:07.942815   45515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:44:07.956932   45515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32815
	I0907 00:44:07.957422   45515 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:44:07.958021   45515 main.go:141] libmachine: Using API Version  1
	I0907 00:44:07.958046   45515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:44:07.958371   45515 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:44:07.960887   45515 out.go:177] * Stopping node "embed-certs-546209"  ...
	I0907 00:44:07.962337   45515 main.go:141] libmachine: Stopping "embed-certs-546209"...
	I0907 00:44:07.962355   45515 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:44:07.963909   45515 main.go:141] libmachine: (embed-certs-546209) Calling .Stop
	I0907 00:44:07.967521   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 0/60
	I0907 00:44:08.969390   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 1/60
	I0907 00:44:09.970894   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 2/60
	I0907 00:44:10.972291   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 3/60
	I0907 00:44:11.973851   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 4/60
	I0907 00:44:12.975628   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 5/60
	I0907 00:44:13.977049   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 6/60
	I0907 00:44:14.978313   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 7/60
	I0907 00:44:15.979995   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 8/60
	I0907 00:44:16.981496   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 9/60
	I0907 00:44:17.983196   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 10/60
	I0907 00:44:18.984750   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 11/60
	I0907 00:44:19.986055   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 12/60
	I0907 00:44:20.987712   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 13/60
	I0907 00:44:21.988883   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 14/60
	I0907 00:44:22.990998   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 15/60
	I0907 00:44:23.992472   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 16/60
	I0907 00:44:24.993990   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 17/60
	I0907 00:44:25.996311   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 18/60
	I0907 00:44:26.997843   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 19/60
	I0907 00:44:28.000218   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 20/60
	I0907 00:44:29.001624   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 21/60
	I0907 00:44:30.003120   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 22/60
	I0907 00:44:31.004337   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 23/60
	I0907 00:44:32.005908   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 24/60
	I0907 00:44:33.007910   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 25/60
	I0907 00:44:34.009145   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 26/60
	I0907 00:44:35.010766   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 27/60
	I0907 00:44:36.012185   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 28/60
	I0907 00:44:37.013464   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 29/60
	I0907 00:44:38.015667   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 30/60
	I0907 00:44:39.017124   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 31/60
	I0907 00:44:40.018515   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 32/60
	I0907 00:44:41.020052   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 33/60
	I0907 00:44:42.021550   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 34/60
	I0907 00:44:43.023632   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 35/60
	I0907 00:44:44.025046   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 36/60
	I0907 00:44:45.026810   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 37/60
	I0907 00:44:46.028338   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 38/60
	I0907 00:44:47.029756   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 39/60
	I0907 00:44:48.031841   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 40/60
	I0907 00:44:49.033371   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 41/60
	I0907 00:44:50.035004   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 42/60
	I0907 00:44:51.037629   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 43/60
	I0907 00:44:52.039044   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 44/60
	I0907 00:44:53.041181   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 45/60
	I0907 00:44:54.042620   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 46/60
	I0907 00:44:55.044231   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 47/60
	I0907 00:44:56.045795   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 48/60
	I0907 00:44:57.047585   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 49/60
	I0907 00:44:58.049781   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 50/60
	I0907 00:44:59.051563   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 51/60
	I0907 00:45:00.052977   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 52/60
	I0907 00:45:01.054574   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 53/60
	I0907 00:45:02.056220   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 54/60
	I0907 00:45:03.058065   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 55/60
	I0907 00:45:04.059636   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 56/60
	I0907 00:45:05.061394   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 57/60
	I0907 00:45:06.062852   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 58/60
	I0907 00:45:07.065091   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 59/60
	I0907 00:45:08.066411   45515 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0907 00:45:08.066469   45515 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0907 00:45:08.066484   45515 retry.go:31] will retry after 594.296121ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0907 00:45:08.661199   45515 stop.go:39] StopHost: embed-certs-546209
	I0907 00:45:08.661531   45515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:45:08.661571   45515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:45:08.675898   45515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39897
	I0907 00:45:08.676295   45515 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:45:08.676843   45515 main.go:141] libmachine: Using API Version  1
	I0907 00:45:08.676862   45515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:45:08.677156   45515 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:45:08.679322   45515 out.go:177] * Stopping node "embed-certs-546209"  ...
	I0907 00:45:08.680765   45515 main.go:141] libmachine: Stopping "embed-certs-546209"...
	I0907 00:45:08.680778   45515 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:45:08.682300   45515 main.go:141] libmachine: (embed-certs-546209) Calling .Stop
	I0907 00:45:08.685423   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 0/60
	I0907 00:45:09.687306   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 1/60
	I0907 00:45:10.688949   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 2/60
	I0907 00:45:11.690525   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 3/60
	I0907 00:45:12.692002   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 4/60
	I0907 00:45:13.693569   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 5/60
	I0907 00:45:14.695269   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 6/60
	I0907 00:45:15.696928   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 7/60
	I0907 00:45:16.698465   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 8/60
	I0907 00:45:17.699882   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 9/60
	I0907 00:45:18.702050   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 10/60
	I0907 00:45:19.703575   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 11/60
	I0907 00:45:20.705001   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 12/60
	I0907 00:45:21.706658   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 13/60
	I0907 00:45:22.708256   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 14/60
	I0907 00:45:23.710094   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 15/60
	I0907 00:45:24.711443   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 16/60
	I0907 00:45:25.713541   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 17/60
	I0907 00:45:26.714917   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 18/60
	I0907 00:45:27.716460   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 19/60
	I0907 00:45:28.718052   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 20/60
	I0907 00:45:29.719820   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 21/60
	I0907 00:45:30.721337   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 22/60
	I0907 00:45:31.722723   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 23/60
	I0907 00:45:32.724434   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 24/60
	I0907 00:45:33.725915   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 25/60
	I0907 00:45:34.727182   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 26/60
	I0907 00:45:35.729264   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 27/60
	I0907 00:45:36.730588   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 28/60
	I0907 00:45:37.731928   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 29/60
	I0907 00:45:38.733412   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 30/60
	I0907 00:45:39.735641   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 31/60
	I0907 00:45:40.736870   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 32/60
	I0907 00:45:41.738285   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 33/60
	I0907 00:45:42.739619   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 34/60
	I0907 00:45:43.741753   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 35/60
	I0907 00:45:44.743263   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 36/60
	I0907 00:45:45.745191   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 37/60
	I0907 00:45:46.746483   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 38/60
	I0907 00:45:47.748056   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 39/60
	I0907 00:45:48.749583   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 40/60
	I0907 00:45:49.751043   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 41/60
	I0907 00:45:50.753284   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 42/60
	I0907 00:45:51.754418   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 43/60
	I0907 00:45:52.755778   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 44/60
	I0907 00:45:53.757399   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 45/60
	I0907 00:45:54.758728   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 46/60
	I0907 00:45:55.760490   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 47/60
	I0907 00:45:56.761865   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 48/60
	I0907 00:45:57.763700   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 49/60
	I0907 00:45:58.765353   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 50/60
	I0907 00:45:59.766674   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 51/60
	I0907 00:46:00.768192   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 52/60
	I0907 00:46:01.770726   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 53/60
	I0907 00:46:02.772121   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 54/60
	I0907 00:46:03.773807   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 55/60
	I0907 00:46:04.775193   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 56/60
	I0907 00:46:05.776536   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 57/60
	I0907 00:46:06.777966   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 58/60
	I0907 00:46:07.779253   45515 main.go:141] libmachine: (embed-certs-546209) Waiting for machine to stop 59/60
	I0907 00:46:08.780229   45515 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0907 00:46:08.780270   45515 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0907 00:46:08.782319   45515 out.go:177] 
	W0907 00:46:08.783743   45515 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0907 00:46:08.783761   45515 out.go:239] * 
	* 
	W0907 00:46:08.786329   45515 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0907 00:46:08.787686   45515 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-546209 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-546209 -n embed-certs-546209
E0907 00:46:17.593248   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-546209 -n embed-certs-546209: exit status 3 (18.609641357s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:46:27.399096   46524 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.242:22: connect: no route to host
	E0907 00:46:27.399115   46524 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.242:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-546209" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.50s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-940806 -n old-k8s-version-940806
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-940806 -n old-k8s-version-940806: exit status 3 (3.167665365s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:45:43.975145   46195 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.245:22: connect: no route to host
	E0907 00:45:43.975167   46195 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.245:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-940806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-940806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153097231s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.83.245:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-940806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-940806 -n old-k8s-version-940806
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-940806 -n old-k8s-version-940806: exit status 3 (3.06278767s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:45:53.191118   46324 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.245:22: connect: no route to host
	E0907 00:45:53.191138   46324 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.245:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-940806" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-773466 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-773466 --alsologtostderr -v=3: exit status 82 (2m1.25723084s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-773466"  ...
	* Stopping node "default-k8s-diff-port-773466"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:45:58.569068   46454 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:45:58.569207   46454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:45:58.569217   46454 out.go:309] Setting ErrFile to fd 2...
	I0907 00:45:58.569225   46454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:45:58.569444   46454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:45:58.569697   46454 out.go:303] Setting JSON to false
	I0907 00:45:58.569792   46454 mustload.go:65] Loading cluster: default-k8s-diff-port-773466
	I0907 00:45:58.570142   46454 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:45:58.570228   46454 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/config.json ...
	I0907 00:45:58.570408   46454 mustload.go:65] Loading cluster: default-k8s-diff-port-773466
	I0907 00:45:58.570537   46454 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:45:58.570574   46454 stop.go:39] StopHost: default-k8s-diff-port-773466
	I0907 00:45:58.571011   46454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:45:58.571073   46454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:45:58.585112   46454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0907 00:45:58.585568   46454 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:45:58.586127   46454 main.go:141] libmachine: Using API Version  1
	I0907 00:45:58.586151   46454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:45:58.586455   46454 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:45:58.588916   46454 out.go:177] * Stopping node "default-k8s-diff-port-773466"  ...
	I0907 00:45:58.590124   46454 main.go:141] libmachine: Stopping "default-k8s-diff-port-773466"...
	I0907 00:45:58.590142   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:45:58.591777   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Stop
	I0907 00:45:58.595178   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 0/60
	I0907 00:45:59.596742   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 1/60
	I0907 00:46:00.598129   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 2/60
	I0907 00:46:01.599804   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 3/60
	I0907 00:46:02.601129   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 4/60
	I0907 00:46:03.603368   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 5/60
	I0907 00:46:04.605453   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 6/60
	I0907 00:46:05.606727   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 7/60
	I0907 00:46:06.608381   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 8/60
	I0907 00:46:07.609745   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 9/60
	I0907 00:46:08.611108   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 10/60
	I0907 00:46:09.612496   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 11/60
	I0907 00:46:10.613922   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 12/60
	I0907 00:46:11.615336   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 13/60
	I0907 00:46:12.616641   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 14/60
	I0907 00:46:13.618682   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 15/60
	I0907 00:46:14.620113   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 16/60
	I0907 00:46:15.621453   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 17/60
	I0907 00:46:16.622758   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 18/60
	I0907 00:46:17.624185   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 19/60
	I0907 00:46:18.625962   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 20/60
	I0907 00:46:19.627362   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 21/60
	I0907 00:46:20.628711   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 22/60
	I0907 00:46:21.630077   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 23/60
	I0907 00:46:22.631644   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 24/60
	I0907 00:46:23.633591   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 25/60
	I0907 00:46:24.635093   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 26/60
	I0907 00:46:25.636502   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 27/60
	I0907 00:46:26.637965   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 28/60
	I0907 00:46:27.639292   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 29/60
	I0907 00:46:28.640584   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 30/60
	I0907 00:46:29.642022   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 31/60
	I0907 00:46:30.643294   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 32/60
	I0907 00:46:31.644660   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 33/60
	I0907 00:46:32.645975   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 34/60
	I0907 00:46:33.647752   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 35/60
	I0907 00:46:34.649156   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 36/60
	I0907 00:46:35.650712   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 37/60
	I0907 00:46:36.652425   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 38/60
	I0907 00:46:37.653941   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 39/60
	I0907 00:46:38.656204   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 40/60
	I0907 00:46:39.657526   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 41/60
	I0907 00:46:40.658929   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 42/60
	I0907 00:46:41.660144   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 43/60
	I0907 00:46:42.661497   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 44/60
	I0907 00:46:43.663696   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 45/60
	I0907 00:46:44.665270   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 46/60
	I0907 00:46:45.666739   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 47/60
	I0907 00:46:46.668548   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 48/60
	I0907 00:46:47.669919   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 49/60
	I0907 00:46:48.672083   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 50/60
	I0907 00:46:49.673291   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 51/60
	I0907 00:46:50.674718   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 52/60
	I0907 00:46:51.675993   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 53/60
	I0907 00:46:52.677432   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 54/60
	I0907 00:46:53.679604   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 55/60
	I0907 00:46:54.681022   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 56/60
	I0907 00:46:55.682475   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 57/60
	I0907 00:46:56.683803   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 58/60
	I0907 00:46:57.685216   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 59/60
	I0907 00:46:58.685725   46454 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0907 00:46:58.685803   46454 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0907 00:46:58.685825   46454 retry.go:31] will retry after 977.384347ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0907 00:46:59.663982   46454 stop.go:39] StopHost: default-k8s-diff-port-773466
	I0907 00:46:59.664326   46454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:46:59.664366   46454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:46:59.678470   46454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34205
	I0907 00:46:59.678854   46454 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:46:59.679355   46454 main.go:141] libmachine: Using API Version  1
	I0907 00:46:59.679378   46454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:46:59.679665   46454 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:46:59.681604   46454 out.go:177] * Stopping node "default-k8s-diff-port-773466"  ...
	I0907 00:46:59.682899   46454 main.go:141] libmachine: Stopping "default-k8s-diff-port-773466"...
	I0907 00:46:59.682912   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:46:59.684427   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Stop
	I0907 00:46:59.688083   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 0/60
	I0907 00:47:00.689430   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 1/60
	I0907 00:47:01.691125   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 2/60
	I0907 00:47:02.692620   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 3/60
	I0907 00:47:03.694068   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 4/60
	I0907 00:47:04.696272   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 5/60
	I0907 00:47:05.697623   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 6/60
	I0907 00:47:06.699231   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 7/60
	I0907 00:47:07.700746   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 8/60
	I0907 00:47:08.702241   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 9/60
	I0907 00:47:09.704293   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 10/60
	I0907 00:47:10.705646   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 11/60
	I0907 00:47:11.707318   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 12/60
	I0907 00:47:12.708674   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 13/60
	I0907 00:47:13.710024   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 14/60
	I0907 00:47:14.711816   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 15/60
	I0907 00:47:15.713306   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 16/60
	I0907 00:47:16.714915   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 17/60
	I0907 00:47:17.716464   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 18/60
	I0907 00:47:18.717882   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 19/60
	I0907 00:47:19.719758   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 20/60
	I0907 00:47:20.721199   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 21/60
	I0907 00:47:21.722705   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 22/60
	I0907 00:47:22.724456   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 23/60
	I0907 00:47:23.725771   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 24/60
	I0907 00:47:24.727487   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 25/60
	I0907 00:47:25.728783   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 26/60
	I0907 00:47:26.730070   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 27/60
	I0907 00:47:27.731434   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 28/60
	I0907 00:47:28.732713   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 29/60
	I0907 00:47:29.735065   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 30/60
	I0907 00:47:30.736540   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 31/60
	I0907 00:47:31.737892   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 32/60
	I0907 00:47:32.739159   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 33/60
	I0907 00:47:33.740310   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 34/60
	I0907 00:47:34.742179   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 35/60
	I0907 00:47:35.743442   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 36/60
	I0907 00:47:36.744721   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 37/60
	I0907 00:47:37.746097   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 38/60
	I0907 00:47:38.747383   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 39/60
	I0907 00:47:39.749166   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 40/60
	I0907 00:47:40.750535   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 41/60
	I0907 00:47:41.751697   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 42/60
	I0907 00:47:42.753020   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 43/60
	I0907 00:47:43.754369   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 44/60
	I0907 00:47:44.756050   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 45/60
	I0907 00:47:45.757506   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 46/60
	I0907 00:47:46.758804   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 47/60
	I0907 00:47:47.760596   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 48/60
	I0907 00:47:48.761710   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 49/60
	I0907 00:47:49.763490   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 50/60
	I0907 00:47:50.765263   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 51/60
	I0907 00:47:51.766614   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 52/60
	I0907 00:47:52.767942   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 53/60
	I0907 00:47:53.769152   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 54/60
	I0907 00:47:54.771148   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 55/60
	I0907 00:47:55.772402   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 56/60
	I0907 00:47:56.773959   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 57/60
	I0907 00:47:57.775413   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 58/60
	I0907 00:47:58.776938   46454 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for machine to stop 59/60
	I0907 00:47:59.778177   46454 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0907 00:47:59.778212   46454 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0907 00:47:59.780606   46454 out.go:177] 
	W0907 00:47:59.782716   46454 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0907 00:47:59.782734   46454 out.go:239] * 
	* 
	W0907 00:47:59.785168   46454 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0907 00:47:59.786610   46454 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-773466 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466: exit status 3 (18.458902132s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:48:18.247127   47104 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	E0907 00:48:18.247150   47104 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-773466" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.72s)
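
The stop failure above follows a fixed shape: each attempt polls the VM state roughly once per second for 60 iterations, the whole stop is retried once after about a second, and if the machine still reports "Running" the command exits with code 82 (GUEST_STOP_TIMEOUT), which start_stop_delete_test.go then reports as the stop failure. The following is a simplified sketch of that bounded wait-and-retry loop under assumed shapes; the real logic lives in minikube's stop path and the libmachine kvm2 driver:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// vmState stands in for the libmachine driver call that asks the KVM domain
// for its state; it always reports "Running" here to mirror the failing run.
func vmState() string { return "Running" }

// stopOnce issues a stop request and polls the state once per second, up to
// maxPolls times, mirroring the "Waiting for machine to stop N/60" lines.
func stopOnce(maxPolls int) error {
	for i := 0; i < maxPolls; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxPolls)
		if vmState() != "Running" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	const attempts = 2 // the log shows exactly one retry after the first timeout
	for i := 0; i < attempts; i++ {
		if err := stopOnce(60); err == nil {
			return
		}
		time.Sleep(time.Second) // brief back-off before the single retry
	}
	// Both attempts timed out: surface GUEST_STOP_TIMEOUT and exit 82,
	// matching the non-zero exit reported by the test above.
	fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM")
	os.Exit(82)
}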

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-321164 -n no-preload-321164
E0907 00:46:24.846643   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-321164 -n no-preload-321164: exit status 3 (3.167615914s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:46:25.959132   46607 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.125:22: connect: no route to host
	E0907 00:46:25.959153   46607 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.125:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-321164 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-321164 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153735226s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.125:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-321164 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-321164 -n no-preload-321164
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-321164 -n no-preload-321164: exit status 3 (3.062109589s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:46:35.175182   46728 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.125:22: connect: no route to host
	E0907 00:46:35.175210   46728 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-321164" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
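
The EnableAddonAfterStop failures for no-preload, embed-certs and default-k8s-diff-port all follow the same two-step sequence: the post-stop status is expected to be "Stopped" but comes back "Error" because SSH to the node gets "no route to host", and enabling the dashboard addon then exits with status 11 because the enable path has to list paused containers over that same unreachable SSH session. The following is a rough sketch of that sequence using a hypothetical run helper, not the actual start_stop_delete_test.go code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and returns its combined output plus exit code,
// in the spirit of the harness's "(dbg) Run" steps.
func run(name string, args ...string) (string, int) {
	out, err := exec.Command(name, args...).CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	const mk = "out/minikube-linux-amd64"
	const profile = "no-preload-321164"

	// Step 1: after "minikube stop" the host status should read "Stopped".
	// In the failing runs it reads "Error" because SSH dials get "no route to host".
	if state, _ := run(mk, "status", "--format={{.Host}}", "-p", profile, "-n", profile); state != "Stopped" {
		fmt.Printf("expected post-stop host status to be %q but got %q\n", "Stopped", state)
	}

	// Step 2: enable an addon post-stop; exit status 11 corresponds to the
	// MK_ADDON_ENABLE_PAUSED path, which needs a working SSH session to check
	// for paused containers via crictl.
	if _, code := run(mk, "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4"); code != 0 {
		fmt.Printf("failed to enable an addon post-stop: exit status %d\n", code)
	}
}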

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-546209 -n embed-certs-546209
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-546209 -n embed-certs-546209: exit status 3 (3.168009127s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:46:30.567133   46666 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.242:22: connect: no route to host
	E0907 00:46:30.567159   46666 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.242:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-546209 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-546209 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152857266s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.242:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-546209 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-546209 -n embed-certs-546209
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-546209 -n embed-certs-546209: exit status 3 (3.063030302s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:46:39.783168   46803 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.242:22: connect: no route to host
	E0907 00:46:39.783197   46803 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.242:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-546209" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466: exit status 3 (3.167348799s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:48:21.415173   47181 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	E0907 00:48:21.415197   47181 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-773466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-773466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153885486s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-773466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466: exit status 3 (3.061701496s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 00:48:30.631151   47251 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	E0907 00:48:30.631169   47251 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-773466" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0907 00:56:17.593573   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-546209 -n embed-certs-546209
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-07 01:05:06.192286653 +0000 UTC m=+5244.913742193
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
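
The UserAppExistsAfterStop checks all fail in the same way: after the post-stop restart, the harness polls for pods carrying the k8s-app=kubernetes-dashboard label and gives up once its 9m0s deadline passes ("context deadline exceeded"). The following is a hedged sketch of that label-based wait using plain kubectl rather than the harness's own helpers; the context name and intervals are illustrative assumptions:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podPhases asks kubectl for the phases of all pods matching a label selector
// in the given namespace; an empty result means no matching pod exists yet.
func podPhases(ctx context.Context, kubeContext, namespace, selector string) (string, error) {
	out, err := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
		"get", "pods", "-n", namespace, "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// 9m0s mirrors the timeout used by the failing UserAppExistsAfterStop checks.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		phases, err := podPhases(ctx, "embed-certs-546209", "kubernetes-dashboard", "k8s-app=kubernetes-dashboard")
		if err == nil && strings.Contains(phases, "Running") {
			fmt.Println("dashboard pod is running")
			return
		}
		select {
		case <-ctx.Done():
			// This is the branch the report hits: no matching pod within the deadline.
			fmt.Println("pod \"k8s-app=kubernetes-dashboard\" failed to start within 9m0s:", ctx.Err())
			return
		case <-time.After(5 * time.Second):
		}
	}
}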
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-546209 -n embed-certs-546209
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-546209 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-546209 logs -n 25: (1.614456377s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-049830                           | kubernetes-upgrade-049830    | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-386196                              | cert-expiration-386196       | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-049830                           | kubernetes-upgrade-049830    | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-940806        | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:43 UTC | 07 Sep 23 00:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-321164             | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-546209            | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-488051 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | disable-driver-mounts-488051                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:45 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-940806             | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-773466  | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-321164                  | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-546209                 | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-773466       | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC | 07 Sep 23 00:56 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/07 00:48:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:48:30.668905   47297 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:48:30.669040   47297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:48:30.669051   47297 out.go:309] Setting ErrFile to fd 2...
	I0907 00:48:30.669055   47297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:48:30.669275   47297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:48:30.669849   47297 out.go:303] Setting JSON to false
	I0907 00:48:30.670802   47297 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5455,"bootTime":1694042256,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:48:30.670876   47297 start.go:138] virtualization: kvm guest
	I0907 00:48:30.673226   47297 out.go:177] * [default-k8s-diff-port-773466] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:48:30.675018   47297 notify.go:220] Checking for updates...
	I0907 00:48:30.675022   47297 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:48:30.676573   47297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:48:30.677899   47297 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:48:30.679390   47297 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:48:30.680678   47297 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:48:30.682324   47297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:48:30.684199   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:48:30.684737   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:48:30.684791   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:48:30.699093   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37855
	I0907 00:48:30.699446   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:48:30.699961   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:48:30.699981   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:48:30.700356   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:48:30.700531   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:48:30.700779   47297 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:48:30.701065   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:48:30.701099   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:48:30.715031   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41907
	I0907 00:48:30.715374   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:48:30.715847   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:48:30.715866   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:48:30.716151   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:48:30.716316   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:48:30.750129   47297 out.go:177] * Using the kvm2 driver based on existing profile
	I0907 00:48:30.751568   47297 start.go:298] selected driver: kvm2
	I0907 00:48:30.751584   47297 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:48:30.751680   47297 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:48:30.752362   47297 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:48:30.752458   47297 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:48:30.765932   47297 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 00:48:30.766254   47297 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:48:30.766285   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:48:30.766297   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:48:30.766312   47297 start_flags.go:321] config:
	{Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:48:30.766449   47297 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:48:30.768165   47297 out.go:177] * Starting control plane node default-k8s-diff-port-773466 in cluster default-k8s-diff-port-773466
	I0907 00:48:28.807066   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:30.769579   47297 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:48:30.769605   47297 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0907 00:48:30.769618   47297 cache.go:57] Caching tarball of preloaded images
	I0907 00:48:30.769690   47297 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:48:30.769700   47297 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 00:48:30.769802   47297 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/config.json ...
	I0907 00:48:30.769965   47297 start.go:365] acquiring machines lock for default-k8s-diff-port-773466: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
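The preload step above reduces to a local-cache check before any download. A minimal shell sketch of that decision, assuming the cache layout shown in the log; the download URL is a placeholder, not the real mirror:

    PRELOAD="preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4"
    CACHE_DIR="${MINIKUBE_HOME:-$HOME/.minikube}/cache/preloaded-tarball"
    if [ -f "$CACHE_DIR/$PRELOAD" ]; then
        # cache hit: same outcome as "Found ... in cache, skipping download"
        echo "found local preload: $CACHE_DIR/$PRELOAD"
    else
        mkdir -p "$CACHE_DIR"
        # placeholder URL for illustration; the real source is not shown in the log
        curl -fLo "$CACHE_DIR/$PRELOAD" "https://example.com/preloads/$PRELOAD"
    fi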
	I0907 00:48:34.886988   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:37.959093   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:44.039083   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:47.111100   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:53.191104   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:56.263090   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:02.343026   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:05.415059   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:11.495064   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:14.567091   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:20.647045   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:23.719041   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:29.799012   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:32.871070   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:38.951073   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:42.023127   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:48.103090   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:51.175063   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:57.255062   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:00.327063   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:06.407045   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:09.479083   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:15.559056   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:18.631050   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:24.711070   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:27.783032   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:30.786864   46768 start.go:369] acquired machines lock for "no-preload-321164" in 3m55.470116528s
	I0907 00:50:30.786911   46768 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:50:30.786932   46768 fix.go:54] fixHost starting: 
	I0907 00:50:30.787365   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:50:30.787402   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:50:30.802096   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0907 00:50:30.802471   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:50:30.803040   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:50:30.803070   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:50:30.803390   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:50:30.803609   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:30.803735   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:50:30.805366   46768 fix.go:102] recreateIfNeeded on no-preload-321164: state=Stopped err=<nil>
	I0907 00:50:30.805394   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	W0907 00:50:30.805601   46768 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:50:30.807478   46768 out.go:177] * Restarting existing kvm2 VM for "no-preload-321164" ...
	I0907 00:50:30.784621   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:50:30.784665   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:50:30.786659   46354 machine.go:91] provisioned docker machine in 4m37.428246924s
	I0907 00:50:30.786707   46354 fix.go:56] fixHost completed within 4m37.448613342s
	I0907 00:50:30.786715   46354 start.go:83] releasing machines lock for "old-k8s-version-940806", held for 4m37.448629588s
	W0907 00:50:30.786743   46354 start.go:672] error starting host: provision: host is not running
	W0907 00:50:30.786862   46354 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0907 00:50:30.786876   46354 start.go:687] Will try again in 5 seconds ...
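The wall of "no route to host" dials above is the provisioner polling the guest's SSH port until the VM answers, then giving up and scheduling a retry of the whole host start. A rough shell stand-in for that wait loop, assuming nc is available on the host:

    HOST=192.168.83.245   # the unreachable guest from the log
    for attempt in $(seq 1 60); do
        if nc -z -w 5 "$HOST" 22; then
            echo "ssh port reachable after $attempt attempts"
            break
        fi
        echo "attempt $attempt: no route to host, retrying in 5s"
        sleep 5
    done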
	I0907 00:50:30.809015   46768 main.go:141] libmachine: (no-preload-321164) Calling .Start
	I0907 00:50:30.809182   46768 main.go:141] libmachine: (no-preload-321164) Ensuring networks are active...
	I0907 00:50:30.809827   46768 main.go:141] libmachine: (no-preload-321164) Ensuring network default is active
	I0907 00:50:30.810153   46768 main.go:141] libmachine: (no-preload-321164) Ensuring network mk-no-preload-321164 is active
	I0907 00:50:30.810520   46768 main.go:141] libmachine: (no-preload-321164) Getting domain xml...
	I0907 00:50:30.811434   46768 main.go:141] libmachine: (no-preload-321164) Creating domain...
	I0907 00:50:32.024103   46768 main.go:141] libmachine: (no-preload-321164) Waiting to get IP...
	I0907 00:50:32.024955   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.025314   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.025386   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.025302   47622 retry.go:31] will retry after 211.413529ms: waiting for machine to come up
	I0907 00:50:32.238887   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.239424   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.239452   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.239400   47622 retry.go:31] will retry after 306.62834ms: waiting for machine to come up
	I0907 00:50:32.547910   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.548378   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.548409   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.548318   47622 retry.go:31] will retry after 360.126343ms: waiting for machine to come up
	I0907 00:50:32.909809   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.910325   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.910356   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.910259   47622 retry.go:31] will retry after 609.953186ms: waiting for machine to come up
	I0907 00:50:33.522073   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:33.522437   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:33.522467   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:33.522382   47622 retry.go:31] will retry after 526.4152ms: waiting for machine to come up
	I0907 00:50:34.050028   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:34.050475   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:34.050503   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:34.050417   47622 retry.go:31] will retry after 748.311946ms: waiting for machine to come up
	I0907 00:50:34.799933   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:34.800367   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:34.800395   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:34.800321   47622 retry.go:31] will retry after 732.484316ms: waiting for machine to come up
	I0907 00:50:35.788945   46354 start.go:365] acquiring machines lock for old-k8s-version-940806: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:50:35.534154   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:35.534583   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:35.534606   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:35.534535   47622 retry.go:31] will retry after 1.217693919s: waiting for machine to come up
	I0907 00:50:36.754260   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:36.754682   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:36.754711   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:36.754634   47622 retry.go:31] will retry after 1.508287783s: waiting for machine to come up
	I0907 00:50:38.264195   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:38.264607   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:38.264630   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:38.264557   47622 retry.go:31] will retry after 1.481448978s: waiting for machine to come up
	I0907 00:50:39.748383   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:39.748865   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:39.748898   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:39.748803   47622 retry.go:31] will retry after 2.345045055s: waiting for machine to come up
	I0907 00:50:42.095158   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:42.095801   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:42.095832   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:42.095747   47622 retry.go:31] will retry after 3.269083195s: waiting for machine to come up
	I0907 00:50:45.369097   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:45.369534   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:45.369561   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:45.369448   47622 retry.go:31] will retry after 4.462134893s: waiting for machine to come up
	I0907 00:50:49.835862   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.836273   46768 main.go:141] libmachine: (no-preload-321164) Found IP for machine: 192.168.61.125
	I0907 00:50:49.836315   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has current primary IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.836342   46768 main.go:141] libmachine: (no-preload-321164) Reserving static IP address...
	I0907 00:50:49.836774   46768 main.go:141] libmachine: (no-preload-321164) Reserved static IP address: 192.168.61.125
	I0907 00:50:49.836794   46768 main.go:141] libmachine: (no-preload-321164) Waiting for SSH to be available...
	I0907 00:50:49.836827   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "no-preload-321164", mac: "52:54:00:eb:da:68", ip: "192.168.61.125"} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.836860   46768 main.go:141] libmachine: (no-preload-321164) DBG | skip adding static IP to network mk-no-preload-321164 - found existing host DHCP lease matching {name: "no-preload-321164", mac: "52:54:00:eb:da:68", ip: "192.168.61.125"}
	I0907 00:50:49.836880   46768 main.go:141] libmachine: (no-preload-321164) DBG | Getting to WaitForSSH function...
	I0907 00:50:49.838931   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.839299   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.839326   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.839464   46768 main.go:141] libmachine: (no-preload-321164) DBG | Using SSH client type: external
	I0907 00:50:49.839500   46768 main.go:141] libmachine: (no-preload-321164) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa (-rw-------)
	I0907 00:50:49.839538   46768 main.go:141] libmachine: (no-preload-321164) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:50:49.839557   46768 main.go:141] libmachine: (no-preload-321164) DBG | About to run SSH command:
	I0907 00:50:49.839568   46768 main.go:141] libmachine: (no-preload-321164) DBG | exit 0
	I0907 00:50:49.930557   46768 main.go:141] libmachine: (no-preload-321164) DBG | SSH cmd err, output: <nil>: 
	I0907 00:50:49.931033   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetConfigRaw
	I0907 00:50:49.931662   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:49.934286   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.934719   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.934755   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.934973   46768 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/config.json ...
	I0907 00:50:49.935197   46768 machine.go:88] provisioning docker machine ...
	I0907 00:50:49.935221   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:49.935409   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:49.935567   46768 buildroot.go:166] provisioning hostname "no-preload-321164"
	I0907 00:50:49.935586   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:49.935730   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:49.937619   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.937879   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.937899   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.938049   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:49.938303   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:49.938464   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:49.938624   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:49.938803   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:49.939300   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:49.939315   46768 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-321164 && echo "no-preload-321164" | sudo tee /etc/hostname
	I0907 00:50:50.076488   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-321164
	
	I0907 00:50:50.076513   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.079041   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.079362   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.079409   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.079614   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.079831   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.080013   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.080183   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.080361   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:50.080757   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:50.080775   46768 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-321164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-321164/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-321164' | sudo tee -a /etc/hosts; 
				fi
			fi
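Host provisioning boils down to two SSH commands: set the hostname, then make sure /etc/hosts carries a matching 127.0.1.1 entry (the snippet above). A condensed, runnable version for an example machine name:

    NAME=no-preload-321164
    # set the kernel hostname and persist it
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    # keep 127.0.1.1 pointing at the new name
    if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
        if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
            sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
        else
            echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
        fi
    fi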
	I0907 00:50:51.203755   46833 start.go:369] acquired machines lock for "embed-certs-546209" in 4m11.274622402s
	I0907 00:50:51.203804   46833 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:50:51.203823   46833 fix.go:54] fixHost starting: 
	I0907 00:50:51.204233   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:50:51.204274   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:50:51.221096   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0907 00:50:51.221487   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:50:51.222026   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:50:51.222048   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:50:51.222401   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:50:51.222595   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:50:51.222757   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:50:51.224388   46833 fix.go:102] recreateIfNeeded on embed-certs-546209: state=Stopped err=<nil>
	I0907 00:50:51.224413   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	W0907 00:50:51.224585   46833 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:50:51.226812   46833 out.go:177] * Restarting existing kvm2 VM for "embed-certs-546209" ...
	I0907 00:50:50.214796   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:50:50.215590   46768 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:50:50.215629   46768 buildroot.go:174] setting up certificates
	I0907 00:50:50.215639   46768 provision.go:83] configureAuth start
	I0907 00:50:50.215659   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:50.215952   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:50.218581   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.218947   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.218970   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.219137   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.221833   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.222177   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.222221   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.222323   46768 provision.go:138] copyHostCerts
	I0907 00:50:50.222377   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:50:50.222390   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:50:50.222497   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:50:50.222628   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:50:50.222646   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:50:50.222682   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:50:50.222765   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:50:50.222784   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:50:50.222817   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:50:50.222880   46768 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.no-preload-321164 san=[192.168.61.125 192.168.61.125 localhost 127.0.0.1 minikube no-preload-321164]
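Certificate provisioning regenerates a server certificate signed by the local minikube CA, with the machine's IP and hostnames as SANs. The sketch below approximates that step with openssl rather than minikube's in-process Go code; file names follow the log, and the subject string is an arbitrary example:

    # CSR + key for the machine
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.no-preload-321164/CN=minikube"
    # sign with the local CA and attach the SANs listed in the log line above
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -out server.pem -days 365 \
        -extfile <(printf "subjectAltName=IP:192.168.61.125,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:no-preload-321164")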
	I0907 00:50:50.456122   46768 provision.go:172] copyRemoteCerts
	I0907 00:50:50.456175   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:50:50.456198   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.458665   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.459030   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.459053   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.459237   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.459468   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.459630   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.459766   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:50.549146   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:50:50.572002   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0907 00:50:50.595576   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:50:50.618054   46768 provision.go:86] duration metric: configureAuth took 402.401011ms
	I0907 00:50:50.618086   46768 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:50:50.618327   46768 config.go:182] Loaded profile config "no-preload-321164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:50:50.618410   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.620908   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.621255   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.621289   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.621432   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.621619   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.621752   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.621879   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.622006   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:50.622586   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:50.622611   46768 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:50:50.946938   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:50:50.946964   46768 machine.go:91] provisioned docker machine in 1.011750962s
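The %!s(MISSING) fragments appear to be a logging artifact (the printf argument is not rendered in the logged command template); the SSH output just above shows what actually lands on disk. Written out as a standalone command sequence:

    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
        | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio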
	I0907 00:50:50.946975   46768 start.go:300] post-start starting for "no-preload-321164" (driver="kvm2")
	I0907 00:50:50.946989   46768 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:50:50.947015   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:50.947339   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:50:50.947367   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.950370   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.950754   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.950798   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.950909   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.951171   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.951331   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.951472   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.040440   46768 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:50:51.044700   46768 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:50:51.044728   46768 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:50:51.044816   46768 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:50:51.044899   46768 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:50:51.045018   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:50:51.053507   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:50:51.077125   46768 start.go:303] post-start completed in 130.134337ms
	I0907 00:50:51.077149   46768 fix.go:56] fixHost completed within 20.29021748s
	I0907 00:50:51.077174   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.079928   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.080266   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.080297   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.080516   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.080744   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.080909   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.081080   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.081255   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:51.081837   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:51.081853   46768 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:50:51.203596   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047851.182131777
	
	I0907 00:50:51.203636   46768 fix.go:206] guest clock: 1694047851.182131777
	I0907 00:50:51.203646   46768 fix.go:219] Guest: 2023-09-07 00:50:51.182131777 +0000 UTC Remote: 2023-09-07 00:50:51.077154021 +0000 UTC m=+255.896364351 (delta=104.977756ms)
	I0907 00:50:51.203664   46768 fix.go:190] guest clock delta is within tolerance: 104.977756ms
	I0907 00:50:51.203668   46768 start.go:83] releasing machines lock for "no-preload-321164", held for 20.416782491s
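Before releasing the lock, fix.go reads the guest clock over SSH (date +%s.%N) and checks that the host/guest delta stays within tolerance, about 105ms here. A small shell approximation, assuming key-based SSH access to the guest:

    GUEST_TIME=$(ssh -o StrictHostKeyChecking=no -i ./machines/no-preload-321164/id_rsa docker@192.168.61.125 'date +%s.%N')
    HOST_TIME=$(date +%s.%N)
    # positive delta means the host clock is ahead of the guest
    echo "delta: $(echo "($HOST_TIME - $GUEST_TIME) * 1000" | bc) ms"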
	I0907 00:50:51.203696   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.203977   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:51.207262   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.207708   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.207755   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.207926   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208394   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208563   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208644   46768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:50:51.208692   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.208755   46768 ssh_runner.go:195] Run: cat /version.json
	I0907 00:50:51.208777   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.211412   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211453   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211863   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.211901   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211931   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.211957   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.212132   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.212212   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.212318   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.212406   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.212477   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.212612   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.212722   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.212875   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.300796   46768 ssh_runner.go:195] Run: systemctl --version
	I0907 00:50:51.324903   46768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:50:51.465767   46768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:50:51.471951   46768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:50:51.472036   46768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:50:51.488733   46768 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:50:51.488761   46768 start.go:466] detecting cgroup driver to use...
	I0907 00:50:51.488831   46768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:50:51.501772   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:50:51.516019   46768 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:50:51.516083   46768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:50:51.530425   46768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:50:51.546243   46768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:50:51.649058   46768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:50:51.768622   46768 docker.go:212] disabling docker service ...
	I0907 00:50:51.768705   46768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:50:51.785225   46768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:50:51.797018   46768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:50:51.908179   46768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:50:52.021212   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
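Runtime selection is mostly a service shuffle: stop containerd, then stop, disable and mask cri-docker and docker so CRI-O is the only runtime left answering on the CRI socket. Condensed from the commands in the log:

    sudo systemctl stop -f containerd || true
    sudo systemctl stop -f cri-docker.socket cri-docker.service || true
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service || true
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service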
	I0907 00:50:52.037034   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:50:52.055163   46768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:50:52.055218   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.065451   46768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:50:52.065520   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.076202   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.086865   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
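Pointing the node at CRI-O is a handful of in-place edits: a crictl.yaml naming the socket, then sed edits to 02-crio.conf for the pause image, the cgroup manager and the conmon cgroup. Collected from the log into one block:

    # crictl talks to CRI-O's socket
    printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" | sudo tee /etc/crictl.yaml
    # pause image, cgroup driver and conmon cgroup, as in the log lines above
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf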
	I0907 00:50:52.096978   46768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:50:52.107492   46768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:50:52.117036   46768 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:50:52.117104   46768 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:50:52.130309   46768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
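The failed sysctl is expected on a fresh guest: the bridge-netfilter key only exists once br_netfilter is loaded, after which IPv4 forwarding is enabled. As a guarded snippet:

    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        sudo modprobe br_netfilter   # the sysctl key appears once the module is loaded
    fi
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"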
	I0907 00:50:52.140016   46768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:50:52.249901   46768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:50:52.422851   46768 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:50:52.422928   46768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:50:52.427852   46768 start.go:534] Will wait 60s for crictl version
	I0907 00:50:52.427903   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:52.431904   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:50:52.472552   46768 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:50:52.472632   46768 ssh_runner.go:195] Run: crio --version
	I0907 00:50:52.526514   46768 ssh_runner.go:195] Run: crio --version
	I0907 00:50:52.580133   46768 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
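After restarting CRI-O (the systemctl lines above), start-up waits up to 60s for the CRI socket to appear and then confirms versions via crictl and crio. Roughly:

    for i in $(seq 1 60); do
        [ -S /var/run/crio/crio.sock ] && break
        sleep 1
    done
    sudo /usr/bin/crictl version
    crio --version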
	I0907 00:50:51.228316   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Start
	I0907 00:50:51.228549   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring networks are active...
	I0907 00:50:51.229311   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring network default is active
	I0907 00:50:51.229587   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring network mk-embed-certs-546209 is active
	I0907 00:50:51.230001   46833 main.go:141] libmachine: (embed-certs-546209) Getting domain xml...
	I0907 00:50:51.230861   46833 main.go:141] libmachine: (embed-certs-546209) Creating domain...
	I0907 00:50:52.512329   46833 main.go:141] libmachine: (embed-certs-546209) Waiting to get IP...
	I0907 00:50:52.513160   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:52.513607   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:52.513709   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:52.513575   47738 retry.go:31] will retry after 266.575501ms: waiting for machine to come up
	I0907 00:50:52.782236   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:52.782674   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:52.782699   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:52.782623   47738 retry.go:31] will retry after 258.252832ms: waiting for machine to come up
	I0907 00:50:53.042276   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:53.042851   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:53.042886   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:53.042799   47738 retry.go:31] will retry after 480.751908ms: waiting for machine to come up
	I0907 00:50:53.525651   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:53.526280   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:53.526314   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:53.526222   47738 retry.go:31] will retry after 592.373194ms: waiting for machine to come up
	I0907 00:50:54.119935   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:54.120401   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:54.120440   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:54.120320   47738 retry.go:31] will retry after 602.269782ms: waiting for machine to come up
	I0907 00:50:54.723919   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:54.724403   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:54.724429   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:54.724356   47738 retry.go:31] will retry after 631.28427ms: waiting for machine to come up
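Note: while no-preload is being prepared, the embed-certs VM is still booting, and the kvm2 driver simply polls for a DHCP lease with increasing backoff (the retry.go lines above). A rough shell analogue, using the MAC address and libvirt network name from the log — the driver itself talks to the libvirt API rather than shelling out:

    MAC=52:54:00:96:b3:6a   # MAC of embed-certs-546209, from the log
    until virsh --connect qemu:///system net-dhcp-leases mk-embed-certs-546209 | grep -qi "$MAC"; do
        sleep 1             # the driver uses randomized, growing delays instead of a fixed sleep
    done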
	I0907 00:50:52.581522   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:52.584587   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:52.584995   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:52.585027   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:52.585212   46768 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0907 00:50:52.589138   46768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
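Note: the single bash -c above is what makes host.minikube.internal resolvable inside the guest. Spelled out, with the gateway IP 192.168.61.1 seen in this run:

    # Drop any stale host.minikube.internal line, append the current gateway,
    # and copy the rebuilt file back over /etc/hosts.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.61.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts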
	I0907 00:50:52.602205   46768 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:50:52.602259   46768 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:50:52.633785   46768 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:50:52.633808   46768 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0907 00:50:52.633868   46768 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.633887   46768 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.633889   46768 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.633929   46768 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0907 00:50:52.633954   46768 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.633849   46768 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:52.633937   46768 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.634076   46768 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.635447   46768 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.635477   46768 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.635516   46768 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.635529   46768 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.635477   46768 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.635578   46768 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.635583   46768 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0907 00:50:52.635587   46768 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:52.868791   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.917664   46768 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0907 00:50:52.917705   46768 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.917740   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:52.921520   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.924174   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.924775   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0907 00:50:52.926455   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.927265   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.936511   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.936550   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.989863   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0907 00:50:52.989967   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.081783   46768 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0907 00:50:53.081828   46768 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:53.081876   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.200951   46768 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0907 00:50:53.200999   46768 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:53.201037   46768 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0907 00:50:53.201055   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201074   46768 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:53.201115   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201120   46768 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0907 00:50:53.201138   46768 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:53.201163   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201196   46768 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0907 00:50:53.201208   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0907 00:50:53.201220   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.201222   46768 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:53.201245   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.201254   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201257   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:53.213879   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:53.213909   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:53.214030   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:53.559290   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.356797   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:55.357248   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:55.357276   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:55.357208   47738 retry.go:31] will retry after 957.470134ms: waiting for machine to come up
	I0907 00:50:56.316920   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:56.317410   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:56.317437   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:56.317357   47738 retry.go:31] will retry after 929.647798ms: waiting for machine to come up
	I0907 00:50:57.249114   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:57.249599   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:57.249631   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:57.249548   47738 retry.go:31] will retry after 1.218276188s: waiting for machine to come up
	I0907 00:50:58.470046   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:58.470509   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:58.470539   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:58.470461   47738 retry.go:31] will retry after 2.324175972s: waiting for machine to come up
	I0907 00:50:55.219723   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (2.018454399s)
	I0907 00:50:55.219753   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0907 00:50:55.219835   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0: (2.018563387s)
	I0907 00:50:55.219874   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0907 00:50:55.219897   46768 ssh_runner.go:235] Completed: which crictl: (2.01861063s)
	I0907 00:50:55.219931   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1: (2.006023749s)
	I0907 00:50:55.219956   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:55.219965   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0907 00:50:55.219974   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:55.220018   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.220026   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1: (2.006085999s)
	I0907 00:50:55.220034   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1: (2.005987599s)
	I0907 00:50:55.220056   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0907 00:50:55.220062   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0907 00:50:55.220065   46768 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.660750078s)
	I0907 00:50:55.220091   46768 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0907 00:50:55.220107   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:50:55.220139   46768 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.220178   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:55.220141   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:50:55.263187   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0907 00:50:55.263256   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0907 00:50:55.263276   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.263282   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0907 00:50:55.263291   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:50:55.263321   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.263334   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0907 00:50:55.263428   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0907 00:50:55.263432   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.275710   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0907 00:50:58.251089   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.987744073s)
	I0907 00:50:58.251119   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0907 00:50:58.251125   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.987662447s)
	I0907 00:50:58.251143   46768 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:58.251164   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0907 00:50:58.251192   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:58.251253   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0907 00:50:58.256733   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0907 00:51:00.798145   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:00.798673   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:00.798702   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:00.798607   47738 retry.go:31] will retry after 1.874271621s: waiting for machine to come up
	I0907 00:51:02.674532   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:02.675085   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:02.675117   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:02.675050   47738 retry.go:31] will retry after 2.9595889s: waiting for machine to come up
	I0907 00:51:04.952628   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.701410779s)
	I0907 00:51:04.952741   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0907 00:51:04.952801   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:51:04.952854   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:51:05.636309   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:05.636744   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:05.636779   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:05.636694   47738 retry.go:31] will retry after 4.45645523s: waiting for machine to come up
	I0907 00:51:06.100759   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.147880303s)
	I0907 00:51:06.100786   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0907 00:51:06.100803   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:51:06.100844   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:51:08.663694   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.56282168s)
	I0907 00:51:08.663725   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0907 00:51:08.663754   46768 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:51:08.663803   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:51:10.023202   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.359374479s)
	I0907 00:51:10.023234   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0907 00:51:10.023276   46768 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0907 00:51:10.023349   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
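Note: because this profile runs without a preload, every control-plane image is loaded one by one from the local cache. Each cycle in the log above is roughly equivalent to the sketch below (image chosen as an example; minikube compares image IDs rather than grepping the tag, so this is a simplification):

    IMAGE=registry.k8s.io/kube-scheduler:v1.28.1
    TARBALL=/var/lib/minikube/images/kube-scheduler_v1.28.1
    # 1. ask CRI-O what it already has, 2. drop the stale tag if present,
    # 3. load the cached tarball previously copied to /var/lib/minikube/images.
    if ! sudo crictl images --output json | grep -q "$IMAGE"; then
        sudo /usr/bin/crictl rmi "$IMAGE" 2>/dev/null || true
        sudo podman load -i "$TARBALL"
    fi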
	I0907 00:51:11.739345   47297 start.go:369] acquired machines lock for "default-k8s-diff-port-773466" in 2m40.969329009s
	I0907 00:51:11.739394   47297 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:51:11.739419   47297 fix.go:54] fixHost starting: 
	I0907 00:51:11.739834   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:11.739870   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:11.755796   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38079
	I0907 00:51:11.756102   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:11.756564   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:51:11.756588   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:11.756875   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:11.757032   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:11.757185   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:51:11.758750   47297 fix.go:102] recreateIfNeeded on default-k8s-diff-port-773466: state=Stopped err=<nil>
	I0907 00:51:11.758772   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	W0907 00:51:11.758955   47297 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:51:11.761066   47297 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-773466" ...
	I0907 00:51:10.095825   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.096285   46833 main.go:141] libmachine: (embed-certs-546209) Found IP for machine: 192.168.50.242
	I0907 00:51:10.096312   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has current primary IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.096321   46833 main.go:141] libmachine: (embed-certs-546209) Reserving static IP address...
	I0907 00:51:10.096706   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "embed-certs-546209", mac: "52:54:00:96:b3:6a", ip: "192.168.50.242"} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.096731   46833 main.go:141] libmachine: (embed-certs-546209) Reserved static IP address: 192.168.50.242
	I0907 00:51:10.096750   46833 main.go:141] libmachine: (embed-certs-546209) DBG | skip adding static IP to network mk-embed-certs-546209 - found existing host DHCP lease matching {name: "embed-certs-546209", mac: "52:54:00:96:b3:6a", ip: "192.168.50.242"}
	I0907 00:51:10.096766   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Getting to WaitForSSH function...
	I0907 00:51:10.096777   46833 main.go:141] libmachine: (embed-certs-546209) Waiting for SSH to be available...
	I0907 00:51:10.098896   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.099227   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.099260   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.099360   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Using SSH client type: external
	I0907 00:51:10.099382   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa (-rw-------)
	I0907 00:51:10.099412   46833 main.go:141] libmachine: (embed-certs-546209) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:10.099428   46833 main.go:141] libmachine: (embed-certs-546209) DBG | About to run SSH command:
	I0907 00:51:10.099444   46833 main.go:141] libmachine: (embed-certs-546209) DBG | exit 0
	I0907 00:51:10.199038   46833 main.go:141] libmachine: (embed-certs-546209) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:10.199377   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetConfigRaw
	I0907 00:51:10.200006   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:10.202924   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.203328   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.203352   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.203576   46833 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/config.json ...
	I0907 00:51:10.203879   46833 machine.go:88] provisioning docker machine ...
	I0907 00:51:10.203908   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:10.204125   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.204290   46833 buildroot.go:166] provisioning hostname "embed-certs-546209"
	I0907 00:51:10.204312   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.204489   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.206898   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.207332   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.207365   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.207473   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.207643   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.207791   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.207920   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.208080   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:10.208476   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:10.208496   46833 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-546209 && echo "embed-certs-546209" | sudo tee /etc/hostname
	I0907 00:51:10.356060   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-546209
	
	I0907 00:51:10.356098   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.359533   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.359867   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.359896   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.360097   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.360284   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.360435   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.360629   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.360820   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:10.361504   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:10.361538   46833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-546209' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-546209/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-546209' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:10.503181   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:10.503211   46833 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:10.503238   46833 buildroot.go:174] setting up certificates
	I0907 00:51:10.503246   46833 provision.go:83] configureAuth start
	I0907 00:51:10.503254   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.503555   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:10.506514   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.506930   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.506955   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.507150   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.509772   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.510081   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.510111   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.510215   46833 provision.go:138] copyHostCerts
	I0907 00:51:10.510281   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:10.510292   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:10.510345   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:10.510438   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:10.510446   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:10.510466   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:10.510552   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:10.510559   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:10.510579   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:10.510638   46833 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.embed-certs-546209 san=[192.168.50.242 192.168.50.242 localhost 127.0.0.1 minikube embed-certs-546209]
	I0907 00:51:10.947044   46833 provision.go:172] copyRemoteCerts
	I0907 00:51:10.947101   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:10.947122   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.949879   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.950221   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.950251   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.950456   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.950660   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.950849   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.950993   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.052610   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:11.077082   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0907 00:51:11.100979   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:51:11.124155   46833 provision.go:86] duration metric: configureAuth took 620.900948ms
	I0907 00:51:11.124176   46833 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:11.124389   46833 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:11.124456   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.127163   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.127498   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.127536   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.127813   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.128011   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.128201   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.128381   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.128560   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:11.129185   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:11.129214   46833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:11.467260   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:11.467297   46833 machine.go:91] provisioned docker machine in 1.263400182s
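Note: provisioning finishes by handing CRI-O an --insecure-registry flag for the service CIDR; the %!s(MISSING) in the logged command is Go's missing-argument marker leaking into the log, not something typed by hand. Functionally, the step amounts to:

    sudo mkdir -p /etc/sysconfig
    echo "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio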
	I0907 00:51:11.467309   46833 start.go:300] post-start starting for "embed-certs-546209" (driver="kvm2")
	I0907 00:51:11.467321   46833 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:11.467343   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.467669   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:11.467715   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.470299   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.470675   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.470705   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.470846   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.471038   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.471191   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.471435   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.568708   46833 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:11.573505   46833 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:11.573533   46833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:11.573595   46833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:11.573669   46833 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:11.573779   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:11.582612   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:11.607383   46833 start.go:303] post-start completed in 140.062214ms
	I0907 00:51:11.607400   46833 fix.go:56] fixHost completed within 20.403578781s
	I0907 00:51:11.607419   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.609882   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.610233   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.610265   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.610411   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.610602   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.610792   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.610972   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.611161   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:11.611550   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:11.611563   46833 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:51:11.739146   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047871.687486971
	
	I0907 00:51:11.739167   46833 fix.go:206] guest clock: 1694047871.687486971
	I0907 00:51:11.739176   46833 fix.go:219] Guest: 2023-09-07 00:51:11.687486971 +0000 UTC Remote: 2023-09-07 00:51:11.607403696 +0000 UTC m=+271.818672785 (delta=80.083275ms)
	I0907 00:51:11.739196   46833 fix.go:190] guest clock delta is within tolerance: 80.083275ms
	I0907 00:51:11.739202   46833 start.go:83] releasing machines lock for "embed-certs-546209", held for 20.535419293s
	I0907 00:51:11.739232   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.739478   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:11.742078   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.742446   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.742474   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.742676   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743172   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743342   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743422   46833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:11.743470   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.743541   46833 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:11.743573   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.746120   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746484   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.746516   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746536   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746640   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.746843   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.746989   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.747015   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.747044   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.747169   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.747179   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.747394   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.747556   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.747717   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.839831   46833 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:11.861736   46833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:12.006017   46833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:12.011678   46833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:12.011739   46833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:12.026851   46833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
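Note: only one CNI config should remain, so the find/mv step above renames the stock bridge and podman configs out of the way. With shell quoting restored (the logged form shows %!p(MISSING), Go's missing-argument marker, where a -printf of the path was presumably intended), it amounts to:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;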
	I0907 00:51:12.026871   46833 start.go:466] detecting cgroup driver to use...
	I0907 00:51:12.026934   46833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:12.040077   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:12.052962   46833 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:12.053018   46833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:12.066509   46833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:12.079587   46833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:12.189043   46833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:12.310997   46833 docker.go:212] disabling docker service ...
	I0907 00:51:12.311065   46833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:12.324734   46833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:12.336808   46833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:12.461333   46833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:12.584841   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:12.598337   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:12.615660   46833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:51:12.615736   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.626161   46833 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:12.626232   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.637475   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.647631   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.658444   46833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:12.669167   46833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:12.678558   46833 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:12.678614   46833 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:12.692654   46833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:12.703465   46833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:12.820819   46833 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:12.996574   46833 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:12.996650   46833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
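
The runner above waits up to 60 seconds for /var/run/crio/crio.sock to appear after restarting the service, re-running stat until it succeeds. A minimal local sketch of that kind of bounded poll, using os.Stat directly instead of minikube's SSH runner (the 500ms interval is an assumption, not the exact value from the code):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until path exists or the deadline passes.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // the socket (or file) showed up
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("socket is present")
    }
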
	I0907 00:51:13.002744   46833 start.go:534] Will wait 60s for crictl version
	I0907 00:51:13.002818   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:51:13.007287   46833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:13.042173   46833 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:13.042254   46833 ssh_runner.go:195] Run: crio --version
	I0907 00:51:13.090562   46833 ssh_runner.go:195] Run: crio --version
	I0907 00:51:13.145112   46833 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:51:13.146767   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:13.149953   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:13.150357   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:13.150388   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:13.150603   46833 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:13.154792   46833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:13.166540   46833 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:51:13.166607   46833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:13.203316   46833 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:51:13.203391   46833 ssh_runner.go:195] Run: which lz4
	I0907 00:51:13.207399   46833 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0907 00:51:13.211826   46833 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:13.211854   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
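
Since the preload tarball is not on the guest, it is copied over (about 457 MB) and later unpacked with sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4. A rough sketch of the check-then-extract step run locally, assuming tar and lz4 are on PATH (the paths here stand in for the real remote ones):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload unpacks an lz4-compressed tarball into destDir,
    // skipping the step if the tarball was never transferred.
    func extractPreload(tarball, destDir string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("preload tarball not found: %w", err)
        }
        // Equivalent of: tar -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("tar", "-I", "lz4", "-C", destDir, "-xf", tarball)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
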
	I0907 00:51:10.979891   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0907 00:51:10.979935   46768 cache_images.go:123] Successfully loaded all cached images
	I0907 00:51:10.979942   46768 cache_images.go:92] LoadImages completed in 18.346122768s
	I0907 00:51:10.980017   46768 ssh_runner.go:195] Run: crio config
	I0907 00:51:11.044573   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:51:11.044595   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:11.044612   46768 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:11.044630   46768 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-321164 NodeName:no-preload-321164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:11.044749   46768 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-321164"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:11.044807   46768 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-321164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-321164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:51:11.044852   46768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:11.055469   46768 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:11.055527   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:11.063642   46768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0907 00:51:11.081151   46768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:11.098623   46768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0907 00:51:11.116767   46768 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:11.120552   46768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
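
The one-liner above pins control-plane.minikube.internal in /etc/hosts by filtering out any stale entry and appending the fresh IP. The same idempotent rewrite, sketched in Go against an ordinary file path (a local stand-in, not the remote /etc/hosts):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost drops any line ending in "\t<host>" and appends "ip\thost".
    func pinHost(hostsFile, ip, host string) error {
        data, err := os.ReadFile(hostsFile)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // remove the stale entry, like the grep -v above
            }
            if line != "" { // drop blank lines to keep the rewrite tidy
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := pinHost("hosts.txt", "192.168.61.125", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
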
	I0907 00:51:11.133845   46768 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164 for IP: 192.168.61.125
	I0907 00:51:11.133876   46768 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:11.134026   46768 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:11.134092   46768 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:11.134173   46768 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.key
	I0907 00:51:11.134216   46768 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.key.05d6cdfc
	I0907 00:51:11.134252   46768 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.key
	I0907 00:51:11.134393   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:11.134436   46768 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:11.134455   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:11.134488   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:11.134512   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:11.134534   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:11.134576   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:11.135184   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:11.161212   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:51:11.185797   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:11.209084   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:51:11.233001   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:11.255646   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:11.278323   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:11.301913   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:11.324316   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:11.349950   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:11.375738   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:11.402735   46768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:11.421372   46768 ssh_runner.go:195] Run: openssl version
	I0907 00:51:11.426855   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:11.436392   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.440778   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.440825   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.446374   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:11.455773   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:11.465073   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.470197   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.470243   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.475740   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:11.484993   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:11.494256   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.498766   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.498825   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.504037   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:11.512896   46768 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:11.517289   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:11.523115   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:11.528780   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:11.534330   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:11.539777   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:11.545439   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
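
Each openssl x509 -checkend 86400 call above verifies that a certificate will still be valid 24 hours from now. The equivalent check can be done in pure Go with crypto/x509; a minimal sketch, assuming a PEM-encoded certificate at an illustrative path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the first certificate in the PEM file is still
    // valid at now+window (the equivalent of `openssl x509 -checkend <seconds>`).
    func validFor(pemPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("valid for another 24h:", ok)
    }
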
	I0907 00:51:11.550878   46768 kubeadm.go:404] StartCluster: {Name:no-preload-321164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:no-preload-321164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:11.550968   46768 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:11.551014   46768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:11.582341   46768 cri.go:89] found id: ""
	I0907 00:51:11.582409   46768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:11.591760   46768 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:11.591782   46768 kubeadm.go:636] restartCluster start
	I0907 00:51:11.591825   46768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:11.600241   46768 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:11.601258   46768 kubeconfig.go:92] found "no-preload-321164" server: "https://192.168.61.125:8443"
	I0907 00:51:11.603775   46768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:11.612221   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:11.612268   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:11.622330   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:11.622348   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:11.622392   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:11.632889   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:12.133626   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:12.133726   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:12.144713   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:12.633065   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:12.633145   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:12.648698   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:13.133304   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:13.133401   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:13.146822   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:13.633303   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:13.633374   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:13.648566   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:14.132966   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:14.133041   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:14.147847   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:14.633090   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:14.633177   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:14.648893   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.133388   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:15.133465   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:15.149162   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
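
Each repeated "Checking apiserver status" entry above is one attempt in a polling loop: pgrep looks for the kube-apiserver process, and if nothing matches yet the loop sleeps roughly half a second and tries again until a deadline. A simplified local sketch of that loop (the 30-second budget is an assumption, not minikube's actual timeout):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
        "time"
    )

    // apiserverPID returns the PID printed by pgrep, or an error if no process matched.
    func apiserverPID() (string, error) {
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", fmt.Errorf("apiserver not running yet: %w", err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        deadline := time.Now().Add(30 * time.Second)
        for time.Now().Before(deadline) {
            if pid, err := apiserverPID(); err == nil {
                fmt.Println("kube-apiserver pid:", pid)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Fprintln(os.Stderr, "gave up waiting for kube-apiserver")
        os.Exit(1)
    }
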
	I0907 00:51:11.762623   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Start
	I0907 00:51:11.762823   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring networks are active...
	I0907 00:51:11.763580   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring network default is active
	I0907 00:51:11.764022   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring network mk-default-k8s-diff-port-773466 is active
	I0907 00:51:11.764494   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Getting domain xml...
	I0907 00:51:11.765139   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Creating domain...
	I0907 00:51:13.032555   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting to get IP...
	I0907 00:51:13.033441   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.033887   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.033934   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.033855   47907 retry.go:31] will retry after 214.721735ms: waiting for machine to come up
	I0907 00:51:13.250549   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.251062   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.251090   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.251001   47907 retry.go:31] will retry after 260.305773ms: waiting for machine to come up
	I0907 00:51:13.512603   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.513144   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.513175   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.513088   47907 retry.go:31] will retry after 293.213959ms: waiting for machine to come up
	I0907 00:51:13.807649   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.808180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.808216   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.808128   47907 retry.go:31] will retry after 455.70029ms: waiting for machine to come up
	I0907 00:51:14.265914   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:14.266412   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:14.266444   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:14.266367   47907 retry.go:31] will retry after 761.48199ms: waiting for machine to come up
	I0907 00:51:15.029446   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.029916   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.029950   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:15.029868   47907 retry.go:31] will retry after 889.947924ms: waiting for machine to come up
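
The retry.go lines above wait for the new VM to obtain a DHCP lease, retrying with delays that grow from a couple of hundred milliseconds toward a second or more. A generic sketch of that retry-with-growing-delay pattern (the growth factor and jitter here are illustrative, not the library's exact schedule):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or attempts run out,
    // sleeping a little longer (with jitter) after each failure.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // grow roughly 1.5x per attempt
        }
        return fmt.Errorf("all %d attempts failed: %w", attempts, err)
    }

    func main() {
        tries := 0
        err := retryWithBackoff(10, 200*time.Millisecond, func() error {
            tries++
            if tries < 4 {
                return errors.New("machine has no IP yet") // stand-in for the DHCP lease lookup
            }
            return nil
        })
        fmt.Println("result:", err, "after", tries, "tries")
    }
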
	I0907 00:51:15.079606   46833 crio.go:444] Took 1.872243 seconds to copy over tarball
	I0907 00:51:15.079679   46833 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:51:18.068521   46833 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988813422s)
	I0907 00:51:18.068547   46833 crio.go:451] Took 2.988919 seconds to extract the tarball
	I0907 00:51:18.068557   46833 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:51:18.109973   46833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:18.154472   46833 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:51:18.154493   46833 cache_images.go:84] Images are preloaded, skipping loading
	I0907 00:51:18.154568   46833 ssh_runner.go:195] Run: crio config
	I0907 00:51:18.216517   46833 cni.go:84] Creating CNI manager for ""
	I0907 00:51:18.216549   46833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:18.216571   46833 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:18.216597   46833 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.242 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-546209 NodeName:embed-certs-546209 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:18.216747   46833 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-546209"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:18.216815   46833 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-546209 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
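
The "kubeadm config:" block above is rendered from the cluster's node name, IP, and Kubernetes version. A toy Go text/template sketch that produces just the InitConfiguration portion (the struct and values are illustrative; field names follow the YAML shown above):

    package main

    import (
        "os"
        "text/template"
    )

    // A trimmed-down InitConfiguration template, filled from cluster settings.
    const initCfg = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.IP}}\n" +
        "  bindPort: {{.Port}}\n" +
        "nodeRegistration:\n" +
        "  criSocket: unix:///var/run/crio/crio.sock\n" +
        "  name: \"{{.Name}}\"\n" +
        "  kubeletExtraArgs:\n" +
        "    node-ip: {{.IP}}\n"

    func main() {
        tmpl := template.Must(template.New("init").Parse(initCfg))
        data := struct {
            Name string
            IP   string
            Port int
        }{Name: "embed-certs-546209", IP: "192.168.50.242", Port: 8443}
        if err := tmpl.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }
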
	I0907 00:51:18.216863   46833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:18.230093   46833 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:18.230164   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:18.239087   46833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0907 00:51:18.256683   46833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:18.274030   46833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0907 00:51:18.294711   46833 ssh_runner.go:195] Run: grep 192.168.50.242	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:18.299655   46833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:18.312980   46833 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209 for IP: 192.168.50.242
	I0907 00:51:18.313028   46833 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:18.313215   46833 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:18.313283   46833 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:18.313382   46833 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/client.key
	I0907 00:51:18.313446   46833 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.key.5dc0f9a1
	I0907 00:51:18.313495   46833 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.key
	I0907 00:51:18.313607   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:18.313633   46833 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:18.313640   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:18.313665   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:18.313688   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:18.313709   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:18.313747   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:18.314356   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:18.344731   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:51:18.368872   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:18.397110   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:51:18.424441   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:18.452807   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:18.481018   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:18.509317   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:18.541038   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:18.565984   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:18.590863   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:18.614083   46833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:18.631295   46833 ssh_runner.go:195] Run: openssl version
	I0907 00:51:18.637229   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:18.651999   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.656999   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.657052   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.663109   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:18.675826   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:18.688358   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.693281   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.693331   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.699223   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:18.711511   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:18.724096   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.729285   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.729338   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.735410   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:18.747948   46833 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:18.753003   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:18.759519   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:18.765813   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:18.772328   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:18.778699   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:18.785207   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:51:18.791515   46833 kubeadm.go:404] StartCluster: {Name:embed-certs-546209 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:embed-certs-546209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:18.791636   46833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:18.791719   46833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:18.831468   46833 cri.go:89] found id: ""
	I0907 00:51:18.831544   46833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:18.843779   46833 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:18.843805   46833 kubeadm.go:636] restartCluster start
	I0907 00:51:18.843863   46833 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:18.854604   46833 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.855622   46833 kubeconfig.go:92] found "embed-certs-546209" server: "https://192.168.50.242:8443"
	I0907 00:51:18.857679   46833 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:18.867583   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.867640   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.879567   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.879587   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.879634   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.891098   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.391839   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.391932   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.405078   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.633045   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:15.633128   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:15.644837   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:16.133842   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:16.133926   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:16.148072   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:16.633750   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:16.633828   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:16.648961   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:17.133669   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:17.133757   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:17.148342   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:17.633967   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:17.634076   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:17.649188   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.133815   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.133917   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.148350   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.633962   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.634047   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.649195   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.133733   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.133821   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.145109   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.633727   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.633808   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.645272   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.133921   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.133990   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.145494   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.920914   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.921395   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.921430   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:15.921325   47907 retry.go:31] will retry after 952.422054ms: waiting for machine to come up
	I0907 00:51:16.875800   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:16.876319   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:16.876356   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:16.876272   47907 retry.go:31] will retry after 1.481584671s: waiting for machine to come up
	I0907 00:51:18.359815   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:18.360270   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:18.360308   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:18.360185   47907 retry.go:31] will retry after 1.355619716s: waiting for machine to come up
	I0907 00:51:19.717081   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:19.717458   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:19.717485   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:19.717419   47907 retry.go:31] will retry after 1.450172017s: waiting for machine to come up
	I0907 00:51:19.892019   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.038702   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.051318   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.391815   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.391913   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.404956   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.891503   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.891594   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.904473   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.391486   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.391563   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.405726   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.891257   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.891337   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.905422   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:22.392028   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:22.392137   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:22.408621   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:22.891926   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:22.892033   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:22.906116   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:23.391605   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:23.391684   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:23.404834   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:23.891360   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:23.891447   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:23.908340   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:24.391916   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:24.392007   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:24.408806   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.633099   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.633200   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.644181   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.133144   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.133227   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.144139   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.612786   46768 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:21.612814   46768 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:21.612826   46768 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:21.612881   46768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:21.643142   46768 cri.go:89] found id: ""
	I0907 00:51:21.643216   46768 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:21.658226   46768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:21.666895   46768 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:21.666960   46768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:21.675285   46768 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:21.675317   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:21.817664   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.473084   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.670341   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.752820   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.842789   46768 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:22.842868   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:22.861783   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:23.383385   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:23.884041   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:24.384065   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:24.884077   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:21.168650   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:21.169014   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:21.169037   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:21.168966   47907 retry.go:31] will retry after 2.876055316s: waiting for machine to come up
	I0907 00:51:24.046598   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:24.046990   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:24.047020   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:24.046937   47907 retry.go:31] will retry after 2.837607521s: waiting for machine to come up
	I0907 00:51:24.891477   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:24.891564   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:24.908102   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:25.391625   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:25.391704   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:25.408399   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:25.892052   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:25.892166   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:25.909608   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:26.391529   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:26.391610   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:26.407459   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:26.891930   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:26.891994   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:26.908217   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:27.391815   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:27.391898   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:27.404370   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:27.891918   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:27.892001   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:27.904988   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:28.391570   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:28.391650   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:28.403968   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:28.868619   46833 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:28.868666   46833 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:28.868679   46833 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:28.868736   46833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:28.907258   46833 cri.go:89] found id: ""
	I0907 00:51:28.907332   46833 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:28.926539   46833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:28.938760   46833 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:28.938837   46833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:28.950550   46833 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:28.950576   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:29.092484   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:25.383423   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:25.413853   46768 api_server.go:72] duration metric: took 2.571070768s to wait for apiserver process to appear ...
	I0907 00:51:25.413877   46768 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:25.413895   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.168577   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:29.168617   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:29.168629   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.228753   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:29.228785   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:29.729501   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.735318   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:29.735345   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:26.886341   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:26.886797   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:26.886819   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:26.886742   47907 retry.go:31] will retry after 3.776269501s: waiting for machine to come up
	I0907 00:51:30.665170   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.665736   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Found IP for machine: 192.168.39.96
	I0907 00:51:30.665770   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Reserving static IP address...
	I0907 00:51:30.665788   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has current primary IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.666183   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-773466", mac: "52:54:00:61:2c:44", ip: "192.168.39.96"} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.666226   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | skip adding static IP to network mk-default-k8s-diff-port-773466 - found existing host DHCP lease matching {name: "default-k8s-diff-port-773466", mac: "52:54:00:61:2c:44", ip: "192.168.39.96"}
	I0907 00:51:30.666245   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Reserved static IP address: 192.168.39.96
	I0907 00:51:30.666262   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for SSH to be available...
	I0907 00:51:30.666279   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Getting to WaitForSSH function...
	I0907 00:51:30.668591   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.229871   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:30.240735   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:30.240764   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:30.729911   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:30.736989   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 200:
	ok
	I0907 00:51:30.746939   46768 api_server.go:141] control plane version: v1.28.1
	I0907 00:51:30.746964   46768 api_server.go:131] duration metric: took 5.333080985s to wait for apiserver health ...
	I0907 00:51:30.746973   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:51:30.746979   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:30.748709   46768 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:32.716941   46354 start.go:369] acquired machines lock for "old-k8s-version-940806" in 56.927952192s
	I0907 00:51:32.717002   46354 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:51:32.717014   46354 fix.go:54] fixHost starting: 
	I0907 00:51:32.717431   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:32.717466   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:32.735021   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39241
	I0907 00:51:32.735485   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:32.736057   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:51:32.736083   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:32.736457   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:32.736713   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:32.736903   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:51:32.738719   46354 fix.go:102] recreateIfNeeded on old-k8s-version-940806: state=Stopped err=<nil>
	I0907 00:51:32.738743   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	W0907 00:51:32.738924   46354 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:51:32.740721   46354 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-940806" ...
	I0907 00:51:32.742202   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Start
	I0907 00:51:32.742362   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring networks are active...
	I0907 00:51:32.743087   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring network default is active
	I0907 00:51:32.743499   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring network mk-old-k8s-version-940806 is active
	I0907 00:51:32.743863   46354 main.go:141] libmachine: (old-k8s-version-940806) Getting domain xml...
	I0907 00:51:32.744603   46354 main.go:141] libmachine: (old-k8s-version-940806) Creating domain...
	I0907 00:51:30.668969   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.670773   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.670838   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Using SSH client type: external
	I0907 00:51:30.670876   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa (-rw-------)
	I0907 00:51:30.670918   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:30.670934   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | About to run SSH command:
	I0907 00:51:30.670947   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | exit 0
	I0907 00:51:30.770939   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:30.771333   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetConfigRaw
	I0907 00:51:30.772100   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:30.775128   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.775616   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.775654   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.775923   47297 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/config.json ...
	I0907 00:51:30.776161   47297 machine.go:88] provisioning docker machine ...
	I0907 00:51:30.776180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:30.776399   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:30.776597   47297 buildroot.go:166] provisioning hostname "default-k8s-diff-port-773466"
	I0907 00:51:30.776618   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:30.776805   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:30.779367   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.779761   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.779793   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.780022   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:30.780238   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.780399   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.780534   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:30.780687   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:30.781088   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:30.781102   47297 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-773466 && echo "default-k8s-diff-port-773466" | sudo tee /etc/hostname
	I0907 00:51:30.932287   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-773466
	
	I0907 00:51:30.932320   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:30.935703   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.936111   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.936146   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.936324   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:30.936647   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.936851   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.937054   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:30.937266   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:30.937890   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:30.937932   47297 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-773466' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-773466/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-773466' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:31.091619   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:31.091654   47297 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:31.091707   47297 buildroot.go:174] setting up certificates
	I0907 00:51:31.091724   47297 provision.go:83] configureAuth start
	I0907 00:51:31.091746   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:31.092066   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:31.095183   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.095670   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.095710   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.095861   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:31.098597   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.098887   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.098962   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.099205   47297 provision.go:138] copyHostCerts
	I0907 00:51:31.099275   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:31.099291   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:31.099362   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:31.099516   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:31.099531   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:31.099563   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:31.099658   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:31.099671   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:31.099700   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:31.099807   47297 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-773466 san=[192.168.39.96 192.168.39.96 localhost 127.0.0.1 minikube default-k8s-diff-port-773466]
	I0907 00:51:31.793599   47297 provision.go:172] copyRemoteCerts
	I0907 00:51:31.793653   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:31.793676   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:31.796773   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.797153   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.797192   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.797362   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:31.797578   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:31.797751   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:31.797865   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:31.903781   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:31.935908   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0907 00:51:31.967385   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0907 00:51:31.998542   47297 provision.go:86] duration metric: configureAuth took 906.744341ms
	I0907 00:51:31.998576   47297 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:31.998836   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:31.998941   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.002251   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.002712   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.002747   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.002996   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.003300   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.003531   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.003717   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.003996   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:32.004637   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:32.004662   47297 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:32.413687   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:32.413765   47297 machine.go:91] provisioned docker machine in 1.637590059s
	I0907 00:51:32.413777   47297 start.go:300] post-start starting for "default-k8s-diff-port-773466" (driver="kvm2")
	I0907 00:51:32.413787   47297 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:32.413823   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.414183   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:32.414227   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.417432   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.417894   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.417954   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.418202   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.418371   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.418517   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.418625   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.523519   47297 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:32.528959   47297 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:32.528983   47297 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:32.529050   47297 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:32.529144   47297 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:32.529249   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:32.538827   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:32.569792   47297 start.go:303] post-start completed in 156.000078ms
	I0907 00:51:32.569819   47297 fix.go:56] fixHost completed within 20.830399155s
	I0907 00:51:32.569860   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.573180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.573599   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.573653   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.573846   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.574100   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.574292   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.574470   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.574658   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:32.575266   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:32.575282   47297 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:51:32.716793   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047892.656226759
	
	I0907 00:51:32.716819   47297 fix.go:206] guest clock: 1694047892.656226759
	I0907 00:51:32.716829   47297 fix.go:219] Guest: 2023-09-07 00:51:32.656226759 +0000 UTC Remote: 2023-09-07 00:51:32.569839112 +0000 UTC m=+181.933138455 (delta=86.387647ms)
	I0907 00:51:32.716855   47297 fix.go:190] guest clock delta is within tolerance: 86.387647ms
	I0907 00:51:32.716868   47297 start.go:83] releasing machines lock for "default-k8s-diff-port-773466", held for 20.977496549s
	I0907 00:51:32.716900   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.717205   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:32.720353   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.720794   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.720825   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.721001   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721495   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721675   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721767   47297 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:32.721813   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.721925   47297 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:32.721951   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.724909   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725154   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725464   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.725510   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725626   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.725808   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.725825   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.725845   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725869   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.725967   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.726058   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.726164   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.726216   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.726352   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.845353   47297 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:32.851616   47297 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:33.005642   47297 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:33.013527   47297 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:33.013603   47297 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:33.033433   47297 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:33.033467   47297 start.go:466] detecting cgroup driver to use...
	I0907 00:51:33.033538   47297 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:33.055861   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:33.073405   47297 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:33.073477   47297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:33.090484   47297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:33.104735   47297 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:33.245072   47297 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:33.411559   47297 docker.go:212] disabling docker service ...
	I0907 00:51:33.411625   47297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:33.429768   47297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:33.446597   47297 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:33.581915   47297 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:33.704648   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:33.721447   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:33.740243   47297 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:51:33.740330   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.750871   47297 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:33.750937   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.761620   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.774350   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.787718   47297 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:33.802740   47297 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:33.814899   47297 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:33.814975   47297 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:33.832422   47297 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:33.844513   47297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:34.020051   47297 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:34.252339   47297 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:34.252415   47297 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:34.258055   47297 start.go:534] Will wait 60s for crictl version
	I0907 00:51:34.258179   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:51:34.262511   47297 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:34.304552   47297 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:34.304626   47297 ssh_runner.go:195] Run: crio --version
	I0907 00:51:34.376009   47297 ssh_runner.go:195] Run: crio --version
	I0907 00:51:34.448097   47297 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:51:29.972856   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.178016   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.291593   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.385791   46833 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:30.385865   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:30.404991   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:30.926995   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:31.427043   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:31.927049   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.426422   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.927274   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.955713   46833 api_server.go:72] duration metric: took 2.569919035s to wait for apiserver process to appear ...
	I0907 00:51:32.955739   46833 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:32.955757   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:32.956284   46833 api_server.go:269] stopped: https://192.168.50.242:8443/healthz: Get "https://192.168.50.242:8443/healthz": dial tcp 192.168.50.242:8443: connect: connection refused
	I0907 00:51:32.956316   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:32.957189   46833 api_server.go:269] stopped: https://192.168.50.242:8443/healthz: Get "https://192.168.50.242:8443/healthz": dial tcp 192.168.50.242:8443: connect: connection refused
	I0907 00:51:33.457905   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:30.750097   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:51:30.784742   46768 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:51:30.828002   46768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:51:30.852490   46768 system_pods.go:59] 8 kube-system pods found
	I0907 00:51:30.852534   46768 system_pods.go:61] "coredns-5dd5756b68-6ndjc" [8f1f8224-b8b4-4fb6-8f6b-2f4a0fb18e17] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:51:30.852547   46768 system_pods.go:61] "etcd-no-preload-321164" [c4b2427c-d882-4d29-af41-553961e5ee48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:51:30.852559   46768 system_pods.go:61] "kube-apiserver-no-preload-321164" [339ca32b-a5a1-474c-a5db-c35e7f87506d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:51:30.852569   46768 system_pods.go:61] "kube-controller-manager-no-preload-321164" [36241c8a-13ce-4e68-887b-ed929258d688] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:51:30.852581   46768 system_pods.go:61] "kube-proxy-f7dm4" [69308cf3-c18e-4edb-b0ea-c7f34a51aed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:51:30.852595   46768 system_pods.go:61] "kube-scheduler-no-preload-321164" [e9b14f0e-7789-4d1d-9a15-02c88d4a1e3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:51:30.852606   46768 system_pods.go:61] "metrics-server-57f55c9bc5-s95n2" [938af7b2-936b-495c-84c9-d580ae646926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:51:30.852622   46768 system_pods.go:61] "storage-provisioner" [70c690a6-a383-4b3f-9817-954056580009] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:51:30.852633   46768 system_pods.go:74] duration metric: took 24.608458ms to wait for pod list to return data ...
	I0907 00:51:30.852646   46768 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:51:30.860785   46768 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:51:30.860811   46768 node_conditions.go:123] node cpu capacity is 2
	I0907 00:51:30.860821   46768 node_conditions.go:105] duration metric: took 8.167675ms to run NodePressure ...
	I0907 00:51:30.860837   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:31.343033   46768 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:51:31.349908   46768 kubeadm.go:787] kubelet initialised
	I0907 00:51:31.349936   46768 kubeadm.go:788] duration metric: took 6.87538ms waiting for restarted kubelet to initialise ...
	I0907 00:51:31.349944   46768 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:31.366931   46768 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:33.392559   46768 pod_ready.go:102] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:34.449546   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:34.452803   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:34.453196   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:34.453226   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:34.453551   47297 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:34.459166   47297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:34.475045   47297 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:51:34.475159   47297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:34.525380   47297 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:51:34.525495   47297 ssh_runner.go:195] Run: which lz4
	I0907 00:51:34.530921   47297 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0907 00:51:34.537992   47297 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:34.538062   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0907 00:51:34.298412   46354 main.go:141] libmachine: (old-k8s-version-940806) Waiting to get IP...
	I0907 00:51:34.299510   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.300108   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.300166   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.300103   48085 retry.go:31] will retry after 237.599934ms: waiting for machine to come up
	I0907 00:51:34.539798   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.540306   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.540406   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.540348   48085 retry.go:31] will retry after 321.765824ms: waiting for machine to come up
	I0907 00:51:34.864120   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.864735   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.864761   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.864698   48085 retry.go:31] will retry after 485.375139ms: waiting for machine to come up
	I0907 00:51:35.351583   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:35.352142   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:35.352174   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:35.352081   48085 retry.go:31] will retry after 490.428576ms: waiting for machine to come up
	I0907 00:51:35.844432   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:35.844896   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:35.844921   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:35.844821   48085 retry.go:31] will retry after 610.440599ms: waiting for machine to come up
	I0907 00:51:36.456988   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:36.457697   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:36.457720   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:36.457634   48085 retry.go:31] will retry after 704.547341ms: waiting for machine to come up
	I0907 00:51:37.163551   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:37.163973   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:37.164001   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:37.163926   48085 retry.go:31] will retry after 825.931424ms: waiting for machine to come up
	I0907 00:51:37.991936   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:37.992550   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:37.992583   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:37.992489   48085 retry.go:31] will retry after 952.175868ms: waiting for machine to come up
	I0907 00:51:37.065943   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:37.065973   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:37.065987   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.176178   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:37.176213   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:37.457739   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.464386   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:37.464423   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:37.958094   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.966530   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:37.966561   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:38.458170   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:38.465933   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 200:
	ok
	I0907 00:51:38.477109   46833 api_server.go:141] control plane version: v1.28.1
	I0907 00:51:38.477135   46833 api_server.go:131] duration metric: took 5.521389594s to wait for apiserver health ...
	I0907 00:51:38.477143   46833 cni.go:84] Creating CNI manager for ""
	I0907 00:51:38.477149   46833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:38.478964   46833 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:38.480383   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:51:38.509844   46833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:51:38.549403   46833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:51:38.571430   46833 system_pods.go:59] 8 kube-system pods found
	I0907 00:51:38.571472   46833 system_pods.go:61] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:51:38.571491   46833 system_pods.go:61] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:51:38.571503   46833 system_pods.go:61] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:51:38.571563   46833 system_pods.go:61] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:51:38.571575   46833 system_pods.go:61] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:51:38.571592   46833 system_pods.go:61] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:51:38.571602   46833 system_pods.go:61] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:51:38.571613   46833 system_pods.go:61] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:51:38.571626   46833 system_pods.go:74] duration metric: took 22.19998ms to wait for pod list to return data ...
	I0907 00:51:38.571637   46833 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:51:38.581324   46833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:51:38.581361   46833 node_conditions.go:123] node cpu capacity is 2
	I0907 00:51:38.581373   46833 node_conditions.go:105] duration metric: took 9.730463ms to run NodePressure ...
	I0907 00:51:38.581393   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:39.140602   46833 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:51:39.147994   46833 kubeadm.go:787] kubelet initialised
	I0907 00:51:39.148025   46833 kubeadm.go:788] duration metric: took 7.397807ms waiting for restarted kubelet to initialise ...
	I0907 00:51:39.148034   46833 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:39.157241   46833 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.172898   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.172935   46833 pod_ready.go:81] duration metric: took 15.665673ms waiting for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.172947   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.172958   46833 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.180630   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "etcd-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.180666   46833 pod_ready.go:81] duration metric: took 7.698054ms waiting for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.180679   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "etcd-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.180692   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.202626   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.202658   46833 pod_ready.go:81] duration metric: took 21.956163ms waiting for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.202671   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.202699   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.210817   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.210849   46833 pod_ready.go:81] duration metric: took 8.138129ms waiting for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.210860   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.210882   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.801924   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-proxy-47255" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.801951   46833 pod_ready.go:81] duration metric: took 591.060955ms waiting for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.801963   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-proxy-47255" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.801970   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:35.403877   46768 pod_ready.go:102] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:36.394774   46768 pod_ready.go:92] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:36.394823   46768 pod_ready.go:81] duration metric: took 5.027852065s waiting for pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:36.394839   46768 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:38.429614   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:36.550649   47297 crio.go:444] Took 2.019779 seconds to copy over tarball
	I0907 00:51:36.550726   47297 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:51:40.133828   47297 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.583074443s)
	I0907 00:51:40.133861   47297 crio.go:451] Took 3.583177 seconds to extract the tarball
	I0907 00:51:40.133872   47297 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:51:40.177675   47297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:40.230574   47297 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:51:40.230594   47297 cache_images.go:84] Images are preloaded, skipping loading
	I0907 00:51:40.230654   47297 ssh_runner.go:195] Run: crio config
	I0907 00:51:40.296445   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:51:40.296473   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:40.296497   47297 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:40.296519   47297 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.96 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-773466 NodeName:default-k8s-diff-port-773466 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:40.296709   47297 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.96
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-773466"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:40.296793   47297 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-773466 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0907 00:51:40.296850   47297 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:40.307543   47297 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:40.307642   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:40.318841   47297 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0907 00:51:40.337125   47297 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:40.354910   47297 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0907 00:51:40.375283   47297 ssh_runner.go:195] Run: grep 192.168.39.96	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:40.380206   47297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:40.394943   47297 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466 for IP: 192.168.39.96
	I0907 00:51:40.394980   47297 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.395194   47297 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:40.395231   47297 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:40.395295   47297 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.key
	I0907 00:51:40.410649   47297 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.key.e8bbde58
	I0907 00:51:40.410724   47297 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.key
	I0907 00:51:40.410868   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:40.410904   47297 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:40.410916   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:40.410942   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:40.410963   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:40.410985   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:40.411038   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:40.411575   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:40.441079   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 00:51:40.465854   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:40.495221   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:51:40.521493   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:40.548227   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:40.574366   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:40.599116   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:40.624901   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:40.650606   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:40.690154   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.690183   46833 pod_ready.go:81] duration metric: took 888.205223ms waiting for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:40.690194   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.690204   46833 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:40.697723   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.697750   46833 pod_ready.go:81] duration metric: took 7.538932ms waiting for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:40.697761   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.697773   46833 pod_ready.go:38] duration metric: took 1.549726748s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:40.697793   46833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:51:40.709255   46833 ops.go:34] apiserver oom_adj: -16
	I0907 00:51:40.709281   46833 kubeadm.go:640] restartCluster took 21.865468537s
	I0907 00:51:40.709290   46833 kubeadm.go:406] StartCluster complete in 21.917781616s
	I0907 00:51:40.709309   46833 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.709403   46833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:51:40.712326   46833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.808025   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:51:40.808158   46833 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:51:40.808236   46833 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:40.808285   46833 addons.go:69] Setting metrics-server=true in profile "embed-certs-546209"
	I0907 00:51:40.808309   46833 addons.go:231] Setting addon metrics-server=true in "embed-certs-546209"
	W0907 00:51:40.808317   46833 addons.go:240] addon metrics-server should already be in state true
	I0907 00:51:40.808252   46833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-546209"
	I0907 00:51:40.808340   46833 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-546209"
	W0907 00:51:40.808354   46833 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:51:40.808375   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:40.808390   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:40.808257   46833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-546209"
	I0907 00:51:40.808493   46833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-546209"
	I0907 00:51:40.809864   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.809936   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.810411   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.810477   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.810518   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.810526   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.827159   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36263
	I0907 00:51:40.827608   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0907 00:51:40.827784   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.828059   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.828326   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.828354   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.828556   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.828579   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.828955   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.829067   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.829670   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.829715   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.829932   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.831070   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0907 00:51:40.831543   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.832142   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.832161   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.832527   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.834743   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.834801   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.853510   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0907 00:51:40.854194   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45027
	I0907 00:51:40.854261   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.854987   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.855019   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.855102   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.855381   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.855745   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.855791   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.855808   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.856430   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.856882   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.858468   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:41.154848   46833 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:51:40.859116   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:41.300012   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:51:41.362259   46833 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:51:41.362296   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:51:41.362332   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:41.460930   46833 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:51:41.460961   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:51:41.460988   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:41.464836   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465151   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465419   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:41.465455   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465590   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:41.465621   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465764   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:41.465908   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:41.465979   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:41.466055   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:41.466150   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:41.466196   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:41.466276   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:41.466309   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:41.587470   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:51:41.594683   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:51:41.594709   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:51:41.621438   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:51:41.621471   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:51:41.664886   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:51:41.664910   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:51:41.691795   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:51:41.886942   46833 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.078877765s)
	I0907 00:51:41.887038   46833 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0907 00:51:41.898851   46833 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-546209" context rescaled to 1 replicas
	I0907 00:51:41.898900   46833 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:51:42.014441   46833 out.go:177] * Verifying Kubernetes components...
	I0907 00:51:38.946740   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:38.947268   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:38.947292   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:38.947211   48085 retry.go:31] will retry after 1.334104337s: waiting for machine to come up
	I0907 00:51:40.282730   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:40.283209   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:40.283233   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:40.283168   48085 retry.go:31] will retry after 1.521256667s: waiting for machine to come up
	I0907 00:51:41.806681   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:41.807182   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:41.807211   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:41.807126   48085 retry.go:31] will retry after 1.907600342s: waiting for machine to come up
	I0907 00:51:42.132070   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:51:42.150876   46833 addons.go:231] Setting addon default-storageclass=true in "embed-certs-546209"
	W0907 00:51:42.150905   46833 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:51:42.150935   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:42.151329   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:42.151357   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:42.172605   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0907 00:51:42.173122   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:42.173662   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:42.173709   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:42.174155   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:42.174813   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:42.174877   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:42.196701   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0907 00:51:42.197287   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:42.197859   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:42.197882   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:42.198246   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:42.198418   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:42.200558   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:42.200942   46833 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:51:42.200954   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:51:42.200967   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:42.204259   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:42.204952   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:42.204975   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:42.205009   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:42.205139   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:42.205280   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:42.205405   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:42.377838   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:51:43.286666   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.699154782s)
	I0907 00:51:43.286720   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.286734   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.287148   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.287174   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.287190   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.287210   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.287220   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.288970   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.289008   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.289021   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.436691   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.744844788s)
	I0907 00:51:43.436717   46833 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.304610389s)
	I0907 00:51:43.436744   46833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-546209" to be "Ready" ...
	I0907 00:51:43.436758   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.436775   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.436862   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.05899604s)
	I0907 00:51:43.436883   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.436893   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.438856   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.438887   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.438903   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.438907   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.438914   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.438919   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.438924   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.438932   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.438934   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.439020   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.439206   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439219   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.439231   46833 addons.go:467] Verifying addon metrics-server=true in "embed-certs-546209"
	I0907 00:51:43.439266   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439277   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.439290   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.439299   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.439502   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439513   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.442917   46833 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0907 00:51:43.444226   46833 addons.go:502] enable addons completed in 2.636061813s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0907 00:51:40.924494   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:42.925582   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:40.679951   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:40.859542   47297 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:40.881658   47297 ssh_runner.go:195] Run: openssl version
	I0907 00:51:40.888518   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:40.902200   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.908038   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.908106   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.914418   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:40.927511   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:40.941360   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.947556   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.947622   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.953780   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:40.966576   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:40.981447   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:40.989719   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:40.989779   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:41.000685   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:41.017936   47297 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:41.023280   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:41.029915   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:41.038011   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:41.044570   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:41.052534   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:41.060580   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:51:41.068664   47297 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:41.068776   47297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:41.068897   47297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:41.111849   47297 cri.go:89] found id: ""
	I0907 00:51:41.111923   47297 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:41.126171   47297 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:41.126193   47297 kubeadm.go:636] restartCluster start
	I0907 00:51:41.126249   47297 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:41.138401   47297 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.139882   47297 kubeconfig.go:92] found "default-k8s-diff-port-773466" server: "https://192.168.39.96:8444"
	I0907 00:51:41.142907   47297 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:41.154285   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.154346   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.168992   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.169012   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.169057   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.183283   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.683942   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.684036   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.701647   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:42.183800   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:42.183882   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:42.213176   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:42.683460   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:42.683550   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:42.701805   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.184099   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:43.184206   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:43.202359   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.683466   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:43.683541   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:43.697133   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:44.183663   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:44.183750   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:44.201236   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:44.684320   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:44.684411   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:44.698198   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:45.183451   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:45.183533   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:45.197529   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.716005   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:43.716632   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:43.716668   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:43.716570   48085 retry.go:31] will retry after 3.526983217s: waiting for machine to come up
	I0907 00:51:47.245213   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:47.245615   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:47.245645   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:47.245561   48085 retry.go:31] will retry after 3.453934877s: waiting for machine to come up
	I0907 00:51:45.450760   46833 node_ready.go:58] node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:47.949024   46833 node_ready.go:49] node "embed-certs-546209" has status "Ready":"True"
	I0907 00:51:47.949053   46833 node_ready.go:38] duration metric: took 4.512298071s waiting for node "embed-certs-546209" to be "Ready" ...
	I0907 00:51:47.949063   46833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:47.956755   46833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.964323   46833 pod_ready.go:92] pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:47.964345   46833 pod_ready.go:81] duration metric: took 7.56298ms waiting for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.964356   46833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.425347   46768 pod_ready.go:92] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.425370   46768 pod_ready.go:81] duration metric: took 9.030524984s waiting for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.425380   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.432508   46768 pod_ready.go:92] pod "kube-apiserver-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.432531   46768 pod_ready.go:81] duration metric: took 7.145112ms waiting for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.432545   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.441245   46768 pod_ready.go:92] pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.441265   46768 pod_ready.go:81] duration metric: took 8.713177ms waiting for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.441275   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f7dm4" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.446603   46768 pod_ready.go:92] pod "kube-proxy-f7dm4" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.446627   46768 pod_ready.go:81] duration metric: took 5.346628ms waiting for pod "kube-proxy-f7dm4" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.446641   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.453061   46768 pod_ready.go:92] pod "kube-scheduler-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.453091   46768 pod_ready.go:81] duration metric: took 6.442457ms waiting for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.453104   46768 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.730093   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:45.684191   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:45.684287   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:45.702020   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:46.183587   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:46.183697   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:46.201390   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:46.683442   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:46.683519   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:46.699015   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:47.183908   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:47.183998   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:47.196617   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:47.683929   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:47.683991   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:47.696499   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:48.183929   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:48.184000   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:48.197425   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:48.683932   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:48.684019   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:48.696986   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:49.184149   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:49.184224   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:49.197363   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:49.684066   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:49.684152   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:49.697853   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:50.183372   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:50.183490   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:50.195818   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:50.700500   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:50.700920   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:50.700939   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:50.700882   48085 retry.go:31] will retry after 4.6319983s: waiting for machine to come up
	I0907 00:51:49.984505   46833 pod_ready.go:102] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:51.987061   46833 pod_ready.go:102] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:53.485331   46833 pod_ready.go:92] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.485356   46833 pod_ready.go:81] duration metric: took 5.520993929s waiting for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.485368   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.491351   46833 pod_ready.go:92] pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.491371   46833 pod_ready.go:81] duration metric: took 5.996687ms waiting for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.491387   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.496425   46833 pod_ready.go:92] pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.496448   46833 pod_ready.go:81] duration metric: took 5.054087ms waiting for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.496460   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.504963   46833 pod_ready.go:92] pod "kube-proxy-47255" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.504982   46833 pod_ready.go:81] duration metric: took 8.515814ms waiting for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.504990   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.550180   46833 pod_ready.go:92] pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.550208   46833 pod_ready.go:81] duration metric: took 45.211992ms waiting for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.550222   46833 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:50.229069   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:52.233340   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:54.728824   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:50.683740   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:50.683806   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:50.695528   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:51.154940   47297 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:51.154990   47297 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:51.155002   47297 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:51.155052   47297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:51.190293   47297 cri.go:89] found id: ""
	I0907 00:51:51.190351   47297 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:51.207237   47297 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:51.216623   47297 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:51.216671   47297 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:51.226376   47297 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:51.226399   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:51.352763   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:51.879625   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.090367   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.169714   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.258757   47297 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:52.258861   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:52.274881   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:52.799083   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:53.298600   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:53.798807   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.299419   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.798660   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.824175   47297 api_server.go:72] duration metric: took 2.565415526s to wait for apiserver process to appear ...
	I0907 00:51:54.824203   47297 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:54.824222   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:55.335922   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.336311   46354 main.go:141] libmachine: (old-k8s-version-940806) Found IP for machine: 192.168.83.245
	I0907 00:51:55.336325   46354 main.go:141] libmachine: (old-k8s-version-940806) Reserving static IP address...
	I0907 00:51:55.336336   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has current primary IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.336816   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "old-k8s-version-940806", mac: "52:54:00:1f:83:50", ip: "192.168.83.245"} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.336872   46354 main.go:141] libmachine: (old-k8s-version-940806) Reserved static IP address: 192.168.83.245
	I0907 00:51:55.336893   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | skip adding static IP to network mk-old-k8s-version-940806 - found existing host DHCP lease matching {name: "old-k8s-version-940806", mac: "52:54:00:1f:83:50", ip: "192.168.83.245"}
	I0907 00:51:55.336909   46354 main.go:141] libmachine: (old-k8s-version-940806) Waiting for SSH to be available...
	I0907 00:51:55.336919   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Getting to WaitForSSH function...
	I0907 00:51:55.339323   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.339730   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.339768   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.339880   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Using SSH client type: external
	I0907 00:51:55.339907   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa (-rw-------)
	I0907 00:51:55.339946   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:55.339964   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | About to run SSH command:
	I0907 00:51:55.340001   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | exit 0
	I0907 00:51:55.483023   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:55.483362   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetConfigRaw
	I0907 00:51:55.484121   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:55.487091   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.487590   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.487621   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.487863   46354 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/config.json ...
	I0907 00:51:55.488067   46354 machine.go:88] provisioning docker machine ...
	I0907 00:51:55.488088   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:55.488332   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.488525   46354 buildroot.go:166] provisioning hostname "old-k8s-version-940806"
	I0907 00:51:55.488551   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.488707   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.491136   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.491567   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.491600   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.491818   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.491950   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.492058   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.492133   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.492237   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:55.492685   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:55.492705   46354 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-940806 && echo "old-k8s-version-940806" | sudo tee /etc/hostname
	I0907 00:51:55.648589   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-940806
	
	I0907 00:51:55.648628   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.651624   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.652046   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.652094   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.652282   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.652472   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.652654   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.652813   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.652977   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:55.653628   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:55.653657   46354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-940806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-940806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-940806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:55.805542   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:55.805573   46354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:55.805607   46354 buildroot.go:174] setting up certificates
	I0907 00:51:55.805617   46354 provision.go:83] configureAuth start
	I0907 00:51:55.805629   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.805907   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:55.808800   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.809142   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.809175   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.809299   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.811385   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.811785   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.811812   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.811980   46354 provision.go:138] copyHostCerts
	I0907 00:51:55.812089   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:55.812104   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:55.812172   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:55.812287   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:55.812297   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:55.812321   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:55.812418   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:55.812427   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:55.812463   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:55.812538   46354 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-940806 san=[192.168.83.245 192.168.83.245 localhost 127.0.0.1 minikube old-k8s-version-940806]
	I0907 00:51:55.920274   46354 provision.go:172] copyRemoteCerts
	I0907 00:51:55.920327   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:55.920348   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.923183   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.923599   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.923632   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.923816   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.924011   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.924174   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.924335   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.020317   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:56.048299   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0907 00:51:56.075483   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:51:56.101118   46354 provision.go:86] duration metric: configureAuth took 295.488336ms
	I0907 00:51:56.101150   46354 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:56.101338   46354 config.go:182] Loaded profile config "old-k8s-version-940806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:51:56.101407   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.104235   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.104600   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.104640   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.104878   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.105093   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.105306   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.105495   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.105668   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:56.106199   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:56.106217   46354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:56.435571   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:56.435644   46354 machine.go:91] provisioned docker machine in 947.562946ms
	I0907 00:51:56.435662   46354 start.go:300] post-start starting for "old-k8s-version-940806" (driver="kvm2")
	I0907 00:51:56.435679   46354 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:56.435712   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.436041   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:56.436083   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.439187   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.439537   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.439563   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.439888   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.440116   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.440285   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.440427   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.542162   46354 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:56.546357   46354 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:56.546375   46354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:56.546435   46354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:56.546511   46354 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:56.546648   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:56.556125   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:56.577844   46354 start.go:303] post-start completed in 142.166343ms
	I0907 00:51:56.577874   46354 fix.go:56] fixHost completed within 23.860860531s
	I0907 00:51:56.577898   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.580726   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.581062   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.581090   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.581221   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.581540   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.581742   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.581909   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.582113   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:56.582532   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:56.582553   46354 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0907 00:51:56.715584   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047916.695896692
	
	I0907 00:51:56.715607   46354 fix.go:206] guest clock: 1694047916.695896692
	I0907 00:51:56.715615   46354 fix.go:219] Guest: 2023-09-07 00:51:56.695896692 +0000 UTC Remote: 2023-09-07 00:51:56.57787864 +0000 UTC m=+363.381197654 (delta=118.018052ms)
	I0907 00:51:56.715632   46354 fix.go:190] guest clock delta is within tolerance: 118.018052ms
	I0907 00:51:56.715639   46354 start.go:83] releasing machines lock for "old-k8s-version-940806", held for 23.998669865s
	I0907 00:51:56.715658   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.715909   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:56.718637   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.718992   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.719030   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.719203   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719646   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719852   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719935   46354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:56.719980   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.720050   46354 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:56.720068   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.722463   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.722752   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.722809   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.722850   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.723041   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.723208   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.723241   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.723282   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.723394   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.723406   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.723599   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.723632   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.723797   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.723956   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.835700   46354 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:56.841554   46354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:56.988658   46354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:56.995421   46354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:56.995495   46354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:57.011588   46354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:57.011608   46354 start.go:466] detecting cgroup driver to use...
	I0907 00:51:57.011669   46354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:57.029889   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:57.043942   46354 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:57.044002   46354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:57.056653   46354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:57.069205   46354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:57.184510   46354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:57.323399   46354 docker.go:212] disabling docker service ...
	I0907 00:51:57.323477   46354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:57.336506   46354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:57.348657   46354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:57.464450   46354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:57.577763   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:57.590934   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:57.609445   46354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0907 00:51:57.609500   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.619112   46354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:57.619173   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.629272   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.638702   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.648720   46354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:57.659046   46354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:57.667895   46354 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:57.667971   46354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:57.681673   46354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:57.690907   46354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:57.801113   46354 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:57.978349   46354 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:57.978432   46354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:57.983665   46354 start.go:534] Will wait 60s for crictl version
	I0907 00:51:57.983714   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:51:57.988244   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:58.019548   46354 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:58.019616   46354 ssh_runner.go:195] Run: crio --version
	I0907 00:51:58.068229   46354 ssh_runner.go:195] Run: crio --version
	I0907 00:51:58.118554   46354 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0907 00:51:58.120322   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:58.122944   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:58.123321   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:58.123377   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:58.123569   46354 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:58.128115   46354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:58.140862   46354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0907 00:51:58.140933   46354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:58.182745   46354 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0907 00:51:58.182829   46354 ssh_runner.go:195] Run: which lz4
	I0907 00:51:58.188491   46354 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0907 00:51:58.193202   46354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:58.193237   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0907 00:51:55.862451   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.363582   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.511655   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:58.511686   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:58.511699   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:58.549405   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:58.549442   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:59.050120   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:59.057915   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:59.057946   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:59.550150   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:59.559928   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:59.559970   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:52:00.050535   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:52:00.060556   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 200:
	ok
	I0907 00:52:00.069872   47297 api_server.go:141] control plane version: v1.28.1
	I0907 00:52:00.069898   47297 api_server.go:131] duration metric: took 5.245689478s to wait for apiserver health ...
	I0907 00:52:00.069906   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:52:00.069911   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:00.071700   47297 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:56.730172   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.731973   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:00.073858   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:52:00.098341   47297 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:52:00.120355   47297 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:52:00.137820   47297 system_pods.go:59] 8 kube-system pods found
	I0907 00:52:00.137936   47297 system_pods.go:61] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:52:00.137967   47297 system_pods.go:61] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:52:00.137989   47297 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:52:00.138007   47297 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:52:00.138018   47297 system_pods.go:61] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:52:00.138032   47297 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:52:00.138045   47297 system_pods.go:61] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:52:00.138058   47297 system_pods.go:61] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:52:00.138069   47297 system_pods.go:74] duration metric: took 17.695163ms to wait for pod list to return data ...
	I0907 00:52:00.138082   47297 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:52:00.145755   47297 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:52:00.145790   47297 node_conditions.go:123] node cpu capacity is 2
	I0907 00:52:00.145803   47297 node_conditions.go:105] duration metric: took 7.711411ms to run NodePressure ...
	I0907 00:52:00.145825   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:00.468823   47297 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:52:00.476107   47297 kubeadm.go:787] kubelet initialised
	I0907 00:52:00.476130   47297 kubeadm.go:788] duration metric: took 7.282541ms waiting for restarted kubelet to initialise ...
	I0907 00:52:00.476138   47297 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:00.483366   47297 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.495045   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.495072   47297 pod_ready.go:81] duration metric: took 11.633116ms waiting for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.495083   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.495092   47297 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.500465   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.500488   47297 pod_ready.go:81] duration metric: took 5.386997ms waiting for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.500498   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.500504   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.507318   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.507392   47297 pod_ready.go:81] duration metric: took 6.878563ms waiting for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.507416   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.507436   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.527784   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.527820   47297 pod_ready.go:81] duration metric: took 20.36412ms waiting for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.527833   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.527844   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.936895   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-proxy-5bh7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.936926   47297 pod_ready.go:81] duration metric: took 409.073374ms waiting for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.936938   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-proxy-5bh7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.936947   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:01.325746   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.325777   47297 pod_ready.go:81] duration metric: took 388.819699ms waiting for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:01.325787   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.325798   47297 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:01.725791   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.725828   47297 pod_ready.go:81] duration metric: took 400.019773ms waiting for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:01.725840   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.725852   47297 pod_ready.go:38] duration metric: took 1.249702286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:01.725871   47297 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:52:01.742792   47297 ops.go:34] apiserver oom_adj: -16
	I0907 00:52:01.742816   47297 kubeadm.go:640] restartCluster took 20.616616394s
	I0907 00:52:01.742825   47297 kubeadm.go:406] StartCluster complete in 20.674170679s
	I0907 00:52:01.742843   47297 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:01.742936   47297 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:52:01.744735   47297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:01.744998   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:52:01.745113   47297 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:52:01.745212   47297 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745218   47297 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745232   47297 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.745240   47297 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:52:01.745232   47297 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-773466"
	I0907 00:52:01.745268   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:52:01.745301   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.745248   47297 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745432   47297 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.745442   47297 addons.go:240] addon metrics-server should already be in state true
	I0907 00:52:01.745489   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.745709   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745718   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745753   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.745813   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.745895   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745930   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.755156   47297 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-773466" context rescaled to 1 replicas
	I0907 00:52:01.755193   47297 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:52:01.757452   47297 out.go:177] * Verifying Kubernetes components...
	I0907 00:52:01.759076   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:52:01.763067   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0907 00:52:01.763578   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.764125   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.764147   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.764483   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.764668   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.764804   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0907 00:52:01.765385   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.765972   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.765988   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.766336   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.768468   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I0907 00:52:01.768952   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.768985   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.769339   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.769827   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.769860   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.770129   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.770612   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.770641   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.782323   47297 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.782353   47297 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:52:01.782387   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.782822   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.782858   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.788535   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0907 00:52:01.789169   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.789826   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.789845   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.790158   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0907 00:52:01.790340   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.790544   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.790616   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.791036   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.791055   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.791552   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.791726   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.793270   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.796517   47297 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:52:01.794011   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.798239   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:52:01.798266   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:52:01.798291   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.800176   47297 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:51:59.928894   46354 crio.go:444] Took 1.740438 seconds to copy over tarball
	I0907 00:51:59.928974   46354 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:52:03.105945   46354 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.176929999s)
	I0907 00:52:03.105977   46354 crio.go:451] Took 3.177055 seconds to extract the tarball
	I0907 00:52:03.105987   46354 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:52:03.150092   46354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:52:03.193423   46354 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0907 00:52:03.193450   46354 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0907 00:52:03.193525   46354 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.193544   46354 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:03.193564   46354 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.193730   46354 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.193799   46354 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.193802   46354 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0907 00:52:03.193829   46354 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.193736   46354 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.194948   46354 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.195017   46354 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.194949   46354 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:03.195642   46354 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.195763   46354 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.195814   46354 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.195843   46354 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0907 00:52:03.195874   46354 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:01.801952   47297 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:52:01.801969   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:52:01.801989   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.800897   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I0907 00:52:01.801662   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.802261   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.802286   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.802332   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.802683   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.802922   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.802961   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.803124   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.804246   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.804272   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.804654   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.804870   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.805283   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.805314   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.805418   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.805448   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.805541   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.805723   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.805889   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.806052   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.822423   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32999
	I0907 00:52:01.822847   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.823441   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.823459   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.823843   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.824036   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.825740   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.826032   47297 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:52:01.826051   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:52:01.826076   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.829041   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.829284   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.829310   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.829407   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.829591   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.829712   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.830194   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.956646   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:52:01.956669   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:52:01.974183   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:52:01.978309   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:52:02.048672   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:52:02.048708   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:52:02.088069   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:52:02.088099   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:52:02.142271   47297 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-773466" to be "Ready" ...
	I0907 00:52:02.142668   47297 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0907 00:52:02.197788   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:52:03.587076   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.612851341s)
	I0907 00:52:03.587130   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587146   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587147   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.608805294s)
	I0907 00:52:03.587182   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587210   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587452   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.587493   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587514   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587525   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587535   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587495   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.587751   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587765   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587892   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587905   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587925   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587935   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.588252   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.588277   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.588285   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.588297   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.588305   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.588543   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.588555   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.648373   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.450538249s)
	I0907 00:52:03.648433   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.648449   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.648789   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.648824   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.648833   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.648848   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.648858   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.649118   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.649137   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.649153   47297 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-773466"
	I0907 00:52:03.834785   47297 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:52:00.858996   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:02.861983   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:01.228807   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:03.229017   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:04.154749   47297 node_ready.go:58] node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:04.260530   47297 addons.go:502] enable addons completed in 2.51536834s: enabled=[storage-provisioner default-storageclass metrics-server]
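The addon flow above is: scp each manifest to /etc/kubernetes/addons/ on the guest, then apply it with the bundled kubectl and the guest kubeconfig. As a rough, illustrative Go sketch of that last step (not minikube's actual code; it runs locally rather than over SSH, with paths taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon mirrors the command shown in the log:
// sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f <manifest>
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.1/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}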
	I0907 00:52:03.398538   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.480702   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.482201   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.482206   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0907 00:52:03.482815   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.484155   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.484815   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.698892   46354 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0907 00:52:03.698936   46354 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.698938   46354 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0907 00:52:03.698965   46354 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0907 00:52:03.699028   46354 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.698975   46354 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0907 00:52:03.698982   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.699069   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.699084   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.703734   46354 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0907 00:52:03.703764   46354 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.703796   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729259   46354 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0907 00:52:03.729295   46354 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.729331   46354 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0907 00:52:03.729366   46354 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.729373   46354 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0907 00:52:03.729394   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.729398   46354 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.729404   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729336   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729441   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729491   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.729519   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0907 00:52:03.729601   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.791169   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0907 00:52:03.814632   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0907 00:52:03.814660   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.814689   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.814747   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0907 00:52:03.814799   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.814839   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0907 00:52:03.814841   46354 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0907 00:52:03.876039   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0907 00:52:03.876095   46354 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0907 00:52:03.876082   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0907 00:52:03.876114   46354 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0907 00:52:03.876153   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0907 00:52:03.876158   46354 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0907 00:52:04.549426   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:05.733437   46354 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.85724297s)
	I0907 00:52:05.733479   46354 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0907 00:52:05.733519   46354 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.184052604s)
	I0907 00:52:05.733568   46354 cache_images.go:92] LoadImages completed in 2.540103614s
	W0907 00:52:05.733639   46354 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
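The image-cache steps above follow a check-then-load pattern: `podman image inspect` to see whether a tag is already present, `crictl rmi` to drop a stale tag, then `podman load -i` on the cached tarball. A minimal sketch of the check-and-load half, using a path and tag from this log (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// imagePresent returns true when `podman image inspect` can resolve the tag.
func imagePresent(tag string) bool {
	return exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", tag).Run() == nil
}

// loadCached loads a previously transferred image tarball into the runtime.
func loadCached(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	if !imagePresent("registry.k8s.io/pause:3.1") {
		if err := loadCached("/var/lib/minikube/images/pause_3.1"); err != nil {
			fmt.Println(err)
		}
	}
}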
	I0907 00:52:05.733723   46354 ssh_runner.go:195] Run: crio config
	I0907 00:52:05.795752   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:52:05.795780   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:05.795801   46354 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:52:05.795824   46354 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.245 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-940806 NodeName:old-k8s-version-940806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0907 00:52:05.795975   46354 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-940806"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-940806
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.245:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:52:05.796074   46354 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-940806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-940806 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
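The kubeadm config and kubelet flags dumped above are rendered from per-cluster values (node IP, node name, CRI socket, API server port). A hypothetical text/template sketch of how such a rendering step could look; this is an assumption for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// initCfg reproduces the shape of the InitConfiguration section seen above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	vals := struct {
		NodeIP, NodeName, CRISocket string
		Port                        int
	}{"192.168.83.245", "old-k8s-version-940806", "/var/run/crio/crio.sock", 8443}
	// Render the per-cluster values into the YAML skeleton.
	template.Must(template.New("kubeadm").Parse(initCfg)).Execute(os.Stdout, vals)
}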
	I0907 00:52:05.796135   46354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0907 00:52:05.807772   46354 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:52:05.807864   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:52:05.818185   46354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0907 00:52:05.835526   46354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:52:05.853219   46354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0907 00:52:05.873248   46354 ssh_runner.go:195] Run: grep 192.168.83.245	control-plane.minikube.internal$ /etc/hosts
	I0907 00:52:05.877640   46354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:52:05.890975   46354 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806 for IP: 192.168.83.245
	I0907 00:52:05.891009   46354 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:05.891171   46354 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:52:05.891226   46354 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:52:05.891327   46354 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.key
	I0907 00:52:05.891407   46354 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.key.8de8e89b
	I0907 00:52:05.891459   46354 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.key
	I0907 00:52:05.891667   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:52:05.891713   46354 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:52:05.891729   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:52:05.891766   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:52:05.891801   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:52:05.891836   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:52:05.891913   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:52:05.892547   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:52:05.917196   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:52:05.942387   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:52:05.965551   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:52:05.987658   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:52:06.012449   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:52:06.037055   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:52:06.061051   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:52:06.085002   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:52:06.109132   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:52:06.132091   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:52:06.155215   46354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:52:06.173122   46354 ssh_runner.go:195] Run: openssl version
	I0907 00:52:06.178736   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:52:06.189991   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.194548   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.194596   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.200538   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:52:06.212151   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:52:06.224356   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.229976   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.230037   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.236389   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:52:06.248369   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:52:06.259325   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.264451   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.264514   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.270564   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:52:06.282506   46354 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:52:06.287280   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:52:06.293280   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:52:06.299272   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:52:06.305342   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:52:06.311194   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:52:06.317634   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
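The openssl runs above check that each certificate is still valid 24 hours out: `openssl x509 -noout -checkend 86400` exits 0 if the certificate does not expire within 86400 seconds. A small sketch of that check (paths taken from the log; not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// validForADay reports whether the certificate is still valid 24h from now.
func validForADay(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout",
		"-in", certPath, "-checkend", "86400").Run()
	return err == nil // a non-zero exit means the cert expires within 86400s
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Printf("%s ok for 24h: %v\n", c, validForADay(c))
	}
}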
	I0907 00:52:06.323437   46354 kubeadm.go:404] StartCluster: {Name:old-k8s-version-940806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-940806 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:52:06.323591   46354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:52:06.323668   46354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:52:06.358285   46354 cri.go:89] found id: ""
	I0907 00:52:06.358357   46354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:52:06.368975   46354 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:52:06.368997   46354 kubeadm.go:636] restartCluster start
	I0907 00:52:06.369060   46354 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:52:06.379841   46354 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.380906   46354 kubeconfig.go:92] found "old-k8s-version-940806" server: "https://192.168.83.245:8443"
	I0907 00:52:06.383428   46354 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:52:06.393862   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.393912   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.406922   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.406947   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.406995   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.419930   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.920685   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.920763   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.934327   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:07.420551   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:07.420652   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:07.438377   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:07.920500   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:07.920598   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:07.936835   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:05.363807   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:07.869141   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:05.229666   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:07.729895   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:09.731464   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:06.656552   47297 node_ready.go:58] node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:09.155326   47297 node_ready.go:49] node "default-k8s-diff-port-773466" has status "Ready":"True"
	I0907 00:52:09.155347   47297 node_ready.go:38] duration metric: took 7.013040488s waiting for node "default-k8s-diff-port-773466" to be "Ready" ...
	I0907 00:52:09.155355   47297 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:09.164225   47297 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.170406   47297 pod_ready.go:92] pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.170437   47297 pod_ready.go:81] duration metric: took 6.189088ms waiting for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.170450   47297 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.178363   47297 pod_ready.go:92] pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.178390   47297 pod_ready.go:81] duration metric: took 7.932283ms waiting for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.178403   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.184875   47297 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.184891   47297 pod_ready.go:81] duration metric: took 6.482032ms waiting for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.184900   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.192246   47297 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.192265   47297 pod_ready.go:81] duration metric: took 7.359919ms waiting for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.192274   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.556032   47297 pod_ready.go:92] pod "kube-proxy-5bh7n" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.556064   47297 pod_ready.go:81] duration metric: took 363.783194ms waiting for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.556077   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
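The node_ready/pod_ready waits above poll the API for the Ready condition on the node and on each system-critical pod. An illustrative client-go sketch of the per-pod check (the kubeconfig path is an assumption to keep the example self-contained; this is not minikube's helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady fetches the pod and reports whether its PodReady condition is True.
func podReady(cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podReady(cs, "kube-system", "kube-proxy-5bh7n")
	fmt.Println(ok, err)
}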
	I0907 00:52:08.420749   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:08.420813   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:08.434111   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:08.920795   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:08.920891   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:08.934515   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:09.420076   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:09.420167   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:09.433668   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:09.920090   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:09.920185   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:09.934602   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.420086   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:10.420186   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:10.434617   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.920124   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:10.920196   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:10.933372   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:11.420990   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:11.421072   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:11.435087   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:11.920579   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:11.920653   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:11.933614   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:12.420100   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:12.420192   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:12.434919   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:12.920816   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:12.920911   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:12.934364   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.357508   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.357966   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:14.358965   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.227826   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:14.228106   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:11.862581   47297 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.363573   47297 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:12.363593   47297 pod_ready.go:81] duration metric: took 2.807509276s waiting for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:12.363602   47297 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:14.763624   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:13.420355   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:13.420427   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:13.434047   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:13.920675   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:13.920757   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:13.933725   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:14.420169   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:14.420244   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:14.433012   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:14.920490   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:14.920603   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:14.934208   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:15.420724   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:15.420807   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:15.433542   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:15.920040   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:15.920114   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:15.933104   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:16.394845   46354 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:52:16.394878   46354 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:52:16.394891   46354 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:52:16.394939   46354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:52:16.430965   46354 cri.go:89] found id: ""
	I0907 00:52:16.431029   46354 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:52:16.449241   46354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:52:16.459891   46354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:52:16.459973   46354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:52:16.470006   46354 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:52:16.470033   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:16.591111   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.262647   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.481491   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.601432   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.722907   46354 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:52:17.723000   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:17.735327   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:16.360886   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.860619   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:16.230019   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.230274   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:17.262772   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:19.264986   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.254002   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:18.753686   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:19.253956   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:19.290590   46354 api_server.go:72] duration metric: took 1.567681708s to wait for apiserver process to appear ...
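Waiting for the apiserver process, as above, amounts to polling `pgrep -xnf kube-apiserver.*minikube.*` until it returns a PID or a deadline passes. An illustrative sketch of such a loop (the interval and timeout here are assumed values, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until the apiserver process shows up.
func waitForAPIServerPID(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("no kube-apiserver process within %s", timeout)
}

func main() {
	if err := waitForAPIServerPID(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}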
	I0907 00:52:19.290614   46354 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:52:19.290632   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:19.291177   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": dial tcp 192.168.83.245:8443: connect: connection refused
	I0907 00:52:19.291217   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:19.291691   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": dial tcp 192.168.83.245:8443: connect: connection refused
	I0907 00:52:19.792323   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:21.357716   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:23.358355   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:20.728569   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:22.730042   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:21.763571   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:24.264990   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:24.793514   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0907 00:52:24.793568   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:24.939397   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:52:24.939429   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:52:25.292624   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:25.350968   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0907 00:52:25.351004   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0907 00:52:25.792573   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:25.799666   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0907 00:52:25.799697   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0907 00:52:26.292258   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:26.301200   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 200:
	ok
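The healthz wait above probes https://192.168.83.245:8443/healthz until it returns 200 "ok"; a 403 from the anonymous user or a 500 with failed post-start hooks simply means the apiserver is not ready yet. A minimal probe sketch (TLS verification is skipped only to keep the example self-contained; a real check would trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthz performs one anonymous GET against the apiserver health endpoint.
func healthz(url string) (int, string, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), nil
}

func main() {
	code, body, err := healthz("https://192.168.83.245:8443/healthz")
	fmt.Println(code, body, err) // 403/500 while bootstrapping, then 200 "ok"
}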
	I0907 00:52:26.313982   46354 api_server.go:141] control plane version: v1.16.0
	I0907 00:52:26.314007   46354 api_server.go:131] duration metric: took 7.023387143s to wait for apiserver health ...
	I0907 00:52:26.314016   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:52:26.314021   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:26.316011   46354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:52:26.317496   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:52:26.335726   46354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
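Configuring bridge CNI, as above, comes down to writing a conflist into /etc/cni/net.d. The JSON below is a typical bridge + host-local configuration for the 10.244.0.0/16 pod CIDR used by this cluster; it is illustrative and not necessarily the exact file minikube ships:

package main

import "os"

// A representative bridge conflist; field values are assumptions for illustration.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}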
	I0907 00:52:26.373988   46354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:52:26.393836   46354 system_pods.go:59] 7 kube-system pods found
	I0907 00:52:26.393861   46354 system_pods.go:61] "coredns-5644d7b6d9-56l68" [ab956d84-2998-42a4-b9ed-b71bc43c9730] Running
	I0907 00:52:26.393866   46354 system_pods.go:61] "etcd-old-k8s-version-940806" [6234bc4e-66d0-4fb6-8631-b45ee56b774c] Running
	I0907 00:52:26.393870   46354 system_pods.go:61] "kube-apiserver-old-k8s-version-940806" [303d2368-1964-4bdb-9d46-91602d6c52b4] Running
	I0907 00:52:26.393875   46354 system_pods.go:61] "kube-controller-manager-old-k8s-version-940806" [7a193f1e-8650-453b-bfa5-d4af3a8bfbc3] Running
	I0907 00:52:26.393878   46354 system_pods.go:61] "kube-proxy-2d8pb" [1689f3e9-0487-422e-a450-9c96595cea00] Running
	I0907 00:52:26.393882   46354 system_pods.go:61] "kube-scheduler-old-k8s-version-940806" [cbd69cd2-3fc6-418b-aa4f-ef19b1b903e1] Running
	I0907 00:52:26.393886   46354 system_pods.go:61] "storage-provisioner" [f313e63f-6c39-4b81-86d1-8054fd6af338] Running
	I0907 00:52:26.393891   46354 system_pods.go:74] duration metric: took 19.879283ms to wait for pod list to return data ...
	I0907 00:52:26.393900   46354 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:52:26.401474   46354 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:52:26.401502   46354 node_conditions.go:123] node cpu capacity is 2
	I0907 00:52:26.401512   46354 node_conditions.go:105] duration metric: took 7.606706ms to run NodePressure ...
	I0907 00:52:26.401529   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:26.811645   46354 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:52:26.817493   46354 retry.go:31] will retry after 177.884133ms: kubelet not initialised
	I0907 00:52:26.999917   46354 retry.go:31] will retry after 499.371742ms: kubelet not initialised
	I0907 00:52:27.504386   46354 retry.go:31] will retry after 692.030349ms: kubelet not initialised
	I0907 00:52:28.201498   46354 retry.go:31] will retry after 627.806419ms: kubelet not initialised
	I0907 00:52:25.358575   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:27.860612   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:25.229134   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:27.230538   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:29.729637   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:26.764040   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:29.264855   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:28.841483   46354 retry.go:31] will retry after 1.816521725s: kubelet not initialised
	I0907 00:52:30.664615   46354 retry.go:31] will retry after 1.888537042s: kubelet not initialised
	I0907 00:52:32.559591   46354 retry.go:31] will retry after 1.787314239s: kubelet not initialised
	I0907 00:52:30.358330   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:32.857719   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:32.229103   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:34.229797   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:31.265047   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:33.763354   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:34.353206   46354 retry.go:31] will retry after 5.20863166s: kubelet not initialised
	I0907 00:52:34.860752   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:37.358005   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:36.229978   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:38.728934   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:36.264389   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:38.762232   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:39.567124   46354 retry.go:31] will retry after 8.04288108s: kubelet not initialised
	I0907 00:52:39.863004   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:42.359394   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:40.729770   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:43.236530   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:40.762994   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:43.263094   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:45.264328   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.616011   46354 retry.go:31] will retry after 4.959306281s: kubelet not initialised
	I0907 00:52:44.858665   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.359722   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:45.729067   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:48.228533   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.763985   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:50.263571   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.580975   46354 retry.go:31] will retry after 19.653399141s: kubelet not initialised
	I0907 00:52:49.858583   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.360050   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.361428   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:50.229168   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.229310   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.229581   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.263685   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.762390   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.857835   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.357322   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.728575   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.228623   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.762553   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.263070   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.357560   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.358151   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.228910   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.728870   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.264341   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.764046   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:05.858279   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:07.861484   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:05.729314   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:08.229765   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:06.263532   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:08.763318   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:12.241966   46354 kubeadm.go:787] kubelet initialised
	I0907 00:53:12.242006   46354 kubeadm.go:788] duration metric: took 45.430332167s waiting for restarted kubelet to initialise ...
	I0907 00:53:12.242016   46354 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:53:12.247545   46354 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.253242   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.253264   46354 pod_ready.go:81] duration metric: took 5.697075ms waiting for pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.253276   46354 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.258467   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.258489   46354 pod_ready.go:81] duration metric: took 5.206456ms waiting for pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.258497   46354 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.264371   46354 pod_ready.go:92] pod "etcd-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.264394   46354 pod_ready.go:81] duration metric: took 5.89143ms waiting for pod "etcd-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.264406   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.269447   46354 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.269467   46354 pod_ready.go:81] duration metric: took 5.053466ms waiting for pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.269481   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.638374   46354 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.638400   46354 pod_ready.go:81] duration metric: took 368.911592ms waiting for pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.638413   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2d8pb" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.039158   46354 pod_ready.go:92] pod "kube-proxy-2d8pb" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:13.039183   46354 pod_ready.go:81] duration metric: took 400.763103ms waiting for pod "kube-proxy-2d8pb" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.039191   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:10.359605   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:12.361679   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:10.729293   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.229130   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:11.263595   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.264729   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:15.268640   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.439450   46354 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:13.439477   46354 pod_ready.go:81] duration metric: took 400.279988ms waiting for pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.439486   46354 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:15.746303   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:17.747193   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:14.858056   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:16.860373   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:19.361777   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:15.730623   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:18.229790   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:17.763744   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.262360   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.246964   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:22.746507   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:21.361826   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:23.857891   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.729313   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:23.228479   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:22.263551   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:24.762509   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.246087   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:27.745946   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.858658   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:28.361105   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.732342   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:28.229971   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:26.763684   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:29.262971   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:29.746043   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:31.746133   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:30.857617   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:32.860863   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:30.728633   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:32.730094   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:31.264742   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:33.764483   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:33.748648   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:36.246158   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:35.358908   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:37.361998   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:35.229141   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:37.729367   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:36.263505   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:38.264633   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:38.746190   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.751934   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:39.858993   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:41.860052   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:44.359421   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.228491   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:42.229143   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:44.229996   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.766539   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:43.264325   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:43.245475   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:45.245574   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:47.246524   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:46.857876   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:48.859569   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:46.230037   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:48.727940   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:45.763110   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:47.763211   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.264727   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:49.745339   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:51.746054   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.859934   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:53.357432   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.729449   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:52.729731   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.731191   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:52.763145   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.763847   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.246469   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:56.746034   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:55.357937   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:57.856743   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:57.227742   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:59.228654   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:56.764030   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:58.765416   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:58.746909   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.246396   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:59.858583   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:02.357694   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:04.357907   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.229565   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.729229   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.263126   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.764100   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.745703   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:05.745994   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.858308   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:09.357561   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.229604   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.727738   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.262721   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.263088   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.264022   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.246673   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.246999   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.746105   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:11.358384   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:13.358491   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.729593   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.732429   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.762306   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.263152   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:14.746491   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.245728   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.361153   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.860338   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.229785   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.730926   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:19.733515   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.763593   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:20.264199   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:19.247271   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:21.251269   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:20.360652   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.860291   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.229545   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:24.729109   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.264956   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:24.764699   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:23.746737   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:25.747269   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:25.357166   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:27.358248   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:26.729136   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.226834   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:27.262945   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.763714   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:28.245784   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:30.245932   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.745051   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.860752   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.357600   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.361871   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:31.227731   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:33.727721   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.262586   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.263485   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.745803   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.745877   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.858000   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.859206   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:35.729469   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.227947   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.763348   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.763533   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:39.245567   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:41.246549   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:40.859969   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:42.862293   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:40.228842   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:42.230064   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:44.732421   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:41.263587   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:43.762536   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:43.746104   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:46.247106   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:45.358648   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:47.858022   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:47.229847   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:49.729764   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:45.763352   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:48.263554   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:48.745911   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.746370   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.357129   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.357416   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:54.359626   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.228487   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:54.728565   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.762919   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.764740   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:55.262939   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:53.248337   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:55.746300   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:56.858127   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.358102   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:56.730045   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.227094   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:57.263059   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.263696   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:58.247342   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:00.745494   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:02.748481   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.360153   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.360737   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.227937   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.235852   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.263956   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.763406   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.246551   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:07.747587   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.858981   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:07.861146   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.729711   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:08.228310   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.764163   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:08.263381   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.263936   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.247504   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.745798   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.360810   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.859446   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.229240   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.728782   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:14.729856   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.763565   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:15.263530   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:14.746534   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.246569   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:15.356953   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.358790   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:16.732983   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.228136   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.264573   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.763137   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.745008   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.745932   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.858109   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:22.358258   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.228589   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.729147   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.763406   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.763580   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.746337   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.748262   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:24.860943   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:27.357823   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.729423   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:27.731209   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.764235   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:28.263390   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:28.254786   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.746056   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:29.859827   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:31.861387   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:33.862627   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.227830   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:32.227911   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:34.728680   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.762895   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:32.763333   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:35.262940   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:33.247352   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:35.247638   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.747011   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:36.356562   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:38.358379   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.227942   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:39.230445   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.264134   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:39.763848   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:40.245726   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.246951   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:40.858763   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.859176   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:41.729215   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.228235   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.263784   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.762310   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.747834   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:46.748669   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:45.361972   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:47.861601   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:45.453504   46768 pod_ready.go:81] duration metric: took 4m0.000384981s waiting for pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace to be "Ready" ...
	E0907 00:55:45.453536   46768 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:55:45.453557   46768 pod_ready.go:38] duration metric: took 4m14.103603262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:55:45.453586   46768 kubeadm.go:640] restartCluster took 4m33.861797616s
	W0907 00:55:45.453681   46768 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0907 00:55:45.453721   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0907 00:55:46.762627   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:48.764174   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:49.247771   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:51.747171   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:50.361591   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:52.362641   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:53.550366   46833 pod_ready.go:81] duration metric: took 4m0.000125687s waiting for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	E0907 00:55:53.550409   46833 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:55:53.550421   46833 pod_ready.go:38] duration metric: took 4m5.601345022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:55:53.550444   46833 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:55:53.550477   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:55:53.550553   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:55:53.601802   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:53.601823   46833 cri.go:89] found id: ""
	I0907 00:55:53.601831   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:55:53.601892   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.606465   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:55:53.606555   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:55:53.643479   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:53.643509   46833 cri.go:89] found id: ""
	I0907 00:55:53.643516   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:55:53.643562   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.648049   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:55:53.648101   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:55:53.679620   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:53.679648   46833 cri.go:89] found id: ""
	I0907 00:55:53.679658   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:55:53.679706   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.684665   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:55:53.684721   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:55:53.725282   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:53.725302   46833 cri.go:89] found id: ""
	I0907 00:55:53.725309   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:55:53.725364   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.729555   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:55:53.729627   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:55:53.761846   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:53.761875   46833 cri.go:89] found id: ""
	I0907 00:55:53.761883   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:55:53.761930   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.766451   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:55:53.766523   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:55:53.800099   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:53.800118   46833 cri.go:89] found id: ""
	I0907 00:55:53.800124   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:55:53.800168   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.804614   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:55:53.804676   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:55:53.841198   46833 cri.go:89] found id: ""
	I0907 00:55:53.841219   46833 logs.go:284] 0 containers: []
	W0907 00:55:53.841225   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:55:53.841230   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:55:53.841288   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:55:53.883044   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:53.883071   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:53.883077   46833 cri.go:89] found id: ""
	I0907 00:55:53.883085   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:55:53.883133   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.887172   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.891540   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:55:53.891566   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:53.944734   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:55:53.944765   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:53.979803   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:55:53.979832   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:54.015131   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:55:54.015159   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:54.062445   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:55:54.062478   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:54.097313   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:55:54.097343   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:55:54.685400   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:55:54.685442   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:55:51.262853   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:53.764766   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:54.248875   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:56.746538   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:54.836523   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:55:54.836555   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:54.885972   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:55:54.886002   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:54.918966   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:55:54.919000   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:54.951966   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:55:54.951996   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:55:54.991382   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:55:54.991418   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:55:55.048526   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:55:55.048561   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:55:57.564574   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:55:57.579844   46833 api_server.go:72] duration metric: took 4m15.68090954s to wait for apiserver process to appear ...
	I0907 00:55:57.579867   46833 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:55:57.579899   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:55:57.579963   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:55:57.619205   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:57.619225   46833 cri.go:89] found id: ""
	I0907 00:55:57.619235   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:55:57.619287   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.623884   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:55:57.623962   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:55:57.653873   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:57.653899   46833 cri.go:89] found id: ""
	I0907 00:55:57.653907   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:55:57.653967   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.658155   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:55:57.658219   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:55:57.688169   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:57.688195   46833 cri.go:89] found id: ""
	I0907 00:55:57.688203   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:55:57.688256   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.692208   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:55:57.692274   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:55:57.722477   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:57.722498   46833 cri.go:89] found id: ""
	I0907 00:55:57.722505   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:55:57.722548   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.726875   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:55:57.726926   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:55:57.768681   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:57.768709   46833 cri.go:89] found id: ""
	I0907 00:55:57.768718   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:55:57.768768   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.773562   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:55:57.773654   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:55:57.806133   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:57.806158   46833 cri.go:89] found id: ""
	I0907 00:55:57.806166   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:55:57.806222   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.810401   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:55:57.810446   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:55:57.840346   46833 cri.go:89] found id: ""
	I0907 00:55:57.840371   46833 logs.go:284] 0 containers: []
	W0907 00:55:57.840379   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:55:57.840384   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:55:57.840435   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:55:57.869978   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:57.869998   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:57.870002   46833 cri.go:89] found id: ""
	I0907 00:55:57.870008   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:55:57.870052   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.874945   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.878942   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:55:57.878964   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:55:58.015009   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:55:58.015035   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:58.063331   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:55:58.063365   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:58.098316   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:55:58.098343   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:58.140312   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:55:58.140342   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:58.170471   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:55:58.170499   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:55:58.217775   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:55:58.217804   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:55:58.275681   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:55:58.275717   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:58.323629   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:55:58.323663   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:58.360608   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:55:58.360636   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:58.397158   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:55:58.397193   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:58.435395   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:55:58.435425   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:55:59.023632   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:55:59.023687   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:55:55.767692   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:58.262808   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:00.263787   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:59.246042   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:01.746441   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:01.540667   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:56:01.548176   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 200:
	ok
	I0907 00:56:01.549418   46833 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:01.549443   46833 api_server.go:131] duration metric: took 3.969568684s to wait for apiserver health ...
	I0907 00:56:01.549451   46833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:01.549474   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:01.549546   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:01.579945   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:56:01.579975   46833 cri.go:89] found id: ""
	I0907 00:56:01.579985   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:56:01.580038   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.584609   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:01.584673   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:01.628626   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:56:01.628647   46833 cri.go:89] found id: ""
	I0907 00:56:01.628656   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:56:01.628711   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.633293   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:01.633362   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:01.663898   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:56:01.663923   46833 cri.go:89] found id: ""
	I0907 00:56:01.663932   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:56:01.663994   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.668130   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:01.668198   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:01.699021   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:56:01.699045   46833 cri.go:89] found id: ""
	I0907 00:56:01.699055   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:56:01.699107   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.703470   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:01.703536   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:01.740360   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:56:01.740387   46833 cri.go:89] found id: ""
	I0907 00:56:01.740396   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:56:01.740450   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.747366   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:01.747445   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:01.783175   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:56:01.783218   46833 cri.go:89] found id: ""
	I0907 00:56:01.783226   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:56:01.783267   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.787565   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:01.787628   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:01.822700   46833 cri.go:89] found id: ""
	I0907 00:56:01.822730   46833 logs.go:284] 0 containers: []
	W0907 00:56:01.822740   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:01.822747   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:01.822818   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:01.853909   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:56:01.853934   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:56:01.853938   46833 cri.go:89] found id: ""
	I0907 00:56:01.853945   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:56:01.853990   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.858209   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.862034   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:56:01.862053   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:56:01.902881   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:56:01.902915   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:56:01.937846   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:56:01.937882   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:56:01.993495   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:56:01.993526   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:56:02.029773   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:56:02.029810   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:02.076180   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:02.076210   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:02.133234   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:02.133268   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:02.278183   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:56:02.278209   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:56:02.325096   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:56:02.325125   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:56:02.362517   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:56:02.362542   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:56:02.393393   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:02.393430   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:02.950480   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:02.950521   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:02.967628   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:56:02.967658   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:56:05.533216   46833 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:05.533249   46833 system_pods.go:61] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running
	I0907 00:56:05.533257   46833 system_pods.go:61] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running
	I0907 00:56:05.533264   46833 system_pods.go:61] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running
	I0907 00:56:05.533271   46833 system_pods.go:61] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running
	I0907 00:56:05.533276   46833 system_pods.go:61] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running
	I0907 00:56:05.533283   46833 system_pods.go:61] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running
	I0907 00:56:05.533292   46833 system_pods.go:61] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:05.533305   46833 system_pods.go:61] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running
	I0907 00:56:05.533315   46833 system_pods.go:74] duration metric: took 3.983859289s to wait for pod list to return data ...
	I0907 00:56:05.533327   46833 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:05.536806   46833 default_sa.go:45] found service account: "default"
	I0907 00:56:05.536833   46833 default_sa.go:55] duration metric: took 3.496147ms for default service account to be created ...
	I0907 00:56:05.536842   46833 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:05.543284   46833 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:05.543310   46833 system_pods.go:89] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running
	I0907 00:56:05.543318   46833 system_pods.go:89] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running
	I0907 00:56:05.543325   46833 system_pods.go:89] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running
	I0907 00:56:05.543332   46833 system_pods.go:89] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running
	I0907 00:56:05.543337   46833 system_pods.go:89] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running
	I0907 00:56:05.543344   46833 system_pods.go:89] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running
	I0907 00:56:05.543355   46833 system_pods.go:89] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:05.543367   46833 system_pods.go:89] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running
	I0907 00:56:05.543377   46833 system_pods.go:126] duration metric: took 6.528914ms to wait for k8s-apps to be running ...
	I0907 00:56:05.543391   46833 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:05.543437   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:05.559581   46833 system_svc.go:56] duration metric: took 16.174514ms WaitForService to wait for kubelet.
	I0907 00:56:05.559613   46833 kubeadm.go:581] duration metric: took 4m23.660681176s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:05.559638   46833 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:05.564521   46833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:05.564552   46833 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:05.564566   46833 node_conditions.go:105] duration metric: took 4.922449ms to run NodePressure ...
	I0907 00:56:05.564579   46833 start.go:228] waiting for startup goroutines ...
	I0907 00:56:05.564589   46833 start.go:233] waiting for cluster config update ...
	I0907 00:56:05.564609   46833 start.go:242] writing updated cluster config ...
	I0907 00:56:05.564968   46833 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:05.618906   46833 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:05.620461   46833 out.go:177] * Done! kubectl is now configured to use "embed-certs-546209" cluster and "default" namespace by default
	I0907 00:56:02.763702   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:05.264729   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:04.246390   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:06.246925   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:07.762598   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:09.764581   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:08.746379   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:11.246764   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:12.263747   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:12.364712   47297 pod_ready.go:81] duration metric: took 4m0.00109115s waiting for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	E0907 00:56:12.364763   47297 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:56:12.364776   47297 pod_ready.go:38] duration metric: took 4m3.209409487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:12.364799   47297 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:56:12.364833   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:12.364891   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:12.416735   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:12.416760   47297 cri.go:89] found id: ""
	I0907 00:56:12.416767   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:12.416818   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.423778   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:12.423849   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:12.465058   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:12.465086   47297 cri.go:89] found id: ""
	I0907 00:56:12.465095   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:12.465152   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.471730   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:12.471793   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:12.508984   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:12.509005   47297 cri.go:89] found id: ""
	I0907 00:56:12.509017   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:12.509073   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.513689   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:12.513745   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:12.550233   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:12.550257   47297 cri.go:89] found id: ""
	I0907 00:56:12.550266   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:12.550325   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.556588   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:12.556665   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:12.598826   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:12.598853   47297 cri.go:89] found id: ""
	I0907 00:56:12.598862   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:12.598913   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.603710   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:12.603778   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:12.645139   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:12.645169   47297 cri.go:89] found id: ""
	I0907 00:56:12.645179   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:12.645236   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.650685   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:12.650755   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:12.686256   47297 cri.go:89] found id: ""
	I0907 00:56:12.686284   47297 logs.go:284] 0 containers: []
	W0907 00:56:12.686291   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:12.686297   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:12.686349   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:12.719614   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:12.719638   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:12.719645   47297 cri.go:89] found id: ""
	I0907 00:56:12.719655   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:12.719713   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.724842   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.728880   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:12.728899   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:12.771051   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:12.771081   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:12.812110   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:12.812140   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:12.847819   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:12.847845   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:13.436674   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:13.436711   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:13.454385   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:13.454425   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:13.617809   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:13.617838   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:13.652209   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:13.652239   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:13.683939   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:13.683977   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:13.730116   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:13.730151   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:13.763253   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:13.763278   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:13.804890   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:13.804918   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:13.861822   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:13.861856   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:17.242461   46768 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.788701806s)
	I0907 00:56:17.242546   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:17.259241   46768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:56:17.268943   46768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:56:17.278094   46768 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:56:17.278138   46768 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0907 00:56:17.342868   46768 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0907 00:56:17.342981   46768 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 00:56:17.519943   46768 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:56:17.520089   46768 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:56:17.520214   46768 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 00:56:17.714902   46768 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:56:13.247487   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:15.746162   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:17.748049   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:17.716739   46768 out.go:204]   - Generating certificates and keys ...
	I0907 00:56:17.716894   46768 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 00:56:17.717007   46768 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 00:56:17.717113   46768 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0907 00:56:17.717361   46768 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0907 00:56:17.717892   46768 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0907 00:56:17.718821   46768 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0907 00:56:17.719502   46768 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0907 00:56:17.719996   46768 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0907 00:56:17.720644   46768 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0907 00:56:17.721254   46768 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0907 00:56:17.721832   46768 kubeadm.go:322] [certs] Using the existing "sa" key
	I0907 00:56:17.721911   46768 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:56:17.959453   46768 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:56:18.029012   46768 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:56:18.146402   46768 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:56:18.309148   46768 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:56:18.309726   46768 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:56:18.312628   46768 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:56:18.315593   46768 out.go:204]   - Booting up control plane ...
	I0907 00:56:18.315744   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:56:18.315870   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:56:18.317157   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:56:18.336536   46768 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:56:18.336947   46768 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:56:18.337042   46768 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0907 00:56:18.472759   46768 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 00:56:16.415279   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:56:16.431021   47297 api_server.go:72] duration metric: took 4m14.6757965s to wait for apiserver process to appear ...
	I0907 00:56:16.431047   47297 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:56:16.431086   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:16.431144   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:16.474048   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:16.474075   47297 cri.go:89] found id: ""
	I0907 00:56:16.474085   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:16.474141   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.478873   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:16.478956   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:16.512799   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:16.512817   47297 cri.go:89] found id: ""
	I0907 00:56:16.512824   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:16.512880   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.518717   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:16.518812   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:16.553996   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:16.554016   47297 cri.go:89] found id: ""
	I0907 00:56:16.554023   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:16.554066   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.559358   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:16.559422   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:16.598717   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:16.598739   47297 cri.go:89] found id: ""
	I0907 00:56:16.598746   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:16.598821   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.603704   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:16.603766   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:16.646900   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:16.646928   47297 cri.go:89] found id: ""
	I0907 00:56:16.646937   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:16.646995   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.651216   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:16.651287   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:16.681334   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:16.681361   47297 cri.go:89] found id: ""
	I0907 00:56:16.681374   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:16.681429   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.685963   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:16.686028   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:16.720214   47297 cri.go:89] found id: ""
	I0907 00:56:16.720243   47297 logs.go:284] 0 containers: []
	W0907 00:56:16.720253   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:16.720259   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:16.720316   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:16.756411   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:16.756437   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:16.756444   47297 cri.go:89] found id: ""
	I0907 00:56:16.756452   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:16.756512   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.762211   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.767635   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:16.767659   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:16.784092   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:16.784122   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:16.936817   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:16.936845   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:16.979426   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:16.979455   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:17.009878   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:17.009912   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:17.048086   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:17.048113   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:17.103114   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:17.103156   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:17.139125   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:17.139163   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:17.181560   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:17.181588   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:17.224815   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:17.224841   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:17.299438   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:17.299474   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:17.355165   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:17.355197   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:17.403781   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:17.403809   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:20.491060   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:56:20.498573   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 200:
	ok
	I0907 00:56:20.501753   47297 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:20.501774   47297 api_server.go:131] duration metric: took 4.070720466s to wait for apiserver health ...
	I0907 00:56:20.501782   47297 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:20.501807   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:20.501856   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:20.545524   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:20.545550   47297 cri.go:89] found id: ""
	I0907 00:56:20.545560   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:20.545616   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.552051   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:20.552120   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:20.593019   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:20.593041   47297 cri.go:89] found id: ""
	I0907 00:56:20.593049   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:20.593104   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.598430   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:20.598500   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:20.639380   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:20.639407   47297 cri.go:89] found id: ""
	I0907 00:56:20.639417   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:20.639507   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.645270   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:20.645342   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:20.247030   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:22.247132   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:20.684338   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:20.684368   47297 cri.go:89] found id: ""
	I0907 00:56:20.684378   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:20.684438   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.689465   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:20.689528   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:20.727854   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:20.727879   47297 cri.go:89] found id: ""
	I0907 00:56:20.727887   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:20.727938   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.733320   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:20.733389   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:20.776584   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:20.776607   47297 cri.go:89] found id: ""
	I0907 00:56:20.776614   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:20.776659   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.781745   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:20.781822   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:20.817720   47297 cri.go:89] found id: ""
	I0907 00:56:20.817746   47297 logs.go:284] 0 containers: []
	W0907 00:56:20.817756   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:20.817763   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:20.817819   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:20.857693   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:20.857716   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:20.857723   47297 cri.go:89] found id: ""
	I0907 00:56:20.857732   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:20.857788   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.862242   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.866469   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:20.866489   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:20.907476   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:20.907514   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:20.946383   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:20.946418   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:20.983830   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:20.983858   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:21.572473   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:21.572524   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:21.626465   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:21.626496   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:21.692455   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:21.692491   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:21.712600   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:21.712632   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:21.855914   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:21.855948   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:21.909035   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:21.909068   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:21.961286   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:21.961317   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:22.002150   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:22.002177   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:22.035129   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:22.035156   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:24.592419   47297 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:24.592455   47297 system_pods.go:61] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running
	I0907 00:56:24.592460   47297 system_pods.go:61] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running
	I0907 00:56:24.592464   47297 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running
	I0907 00:56:24.592469   47297 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running
	I0907 00:56:24.592473   47297 system_pods.go:61] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running
	I0907 00:56:24.592477   47297 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running
	I0907 00:56:24.592483   47297 system_pods.go:61] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:24.592489   47297 system_pods.go:61] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running
	I0907 00:56:24.592494   47297 system_pods.go:74] duration metric: took 4.090707422s to wait for pod list to return data ...
	I0907 00:56:24.592501   47297 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:24.596106   47297 default_sa.go:45] found service account: "default"
	I0907 00:56:24.596127   47297 default_sa.go:55] duration metric: took 3.621408ms for default service account to be created ...
	I0907 00:56:24.596134   47297 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:24.601998   47297 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:24.602021   47297 system_pods.go:89] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running
	I0907 00:56:24.602026   47297 system_pods.go:89] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running
	I0907 00:56:24.602032   47297 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running
	I0907 00:56:24.602037   47297 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running
	I0907 00:56:24.602041   47297 system_pods.go:89] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running
	I0907 00:56:24.602046   47297 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running
	I0907 00:56:24.602054   47297 system_pods.go:89] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:24.602063   47297 system_pods.go:89] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running
	I0907 00:56:24.602069   47297 system_pods.go:126] duration metric: took 5.931212ms to wait for k8s-apps to be running ...
	I0907 00:56:24.602076   47297 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:24.602116   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:24.623704   47297 system_svc.go:56] duration metric: took 21.617229ms WaitForService to wait for kubelet.
	I0907 00:56:24.623734   47297 kubeadm.go:581] duration metric: took 4m22.868513281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:24.623754   47297 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:24.628408   47297 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:24.628435   47297 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:24.628444   47297 node_conditions.go:105] duration metric: took 4.686272ms to run NodePressure ...
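
The node_conditions.go lines above read the node's reported ephemeral-storage and CPU capacity as part of the NodePressure verification. A minimal client-go sketch of the same lookup, assuming an already-built clientset (readCapacity is a hypothetical helper, not minikube's implementation, and the real code may read allocatable rather than capacity):

    // Illustrative sketch of reading the two values logged above.
    package nodesketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // readCapacity prints the node's ephemeral-storage and cpu capacity.
    func readCapacity(client kubernetes.Interface, nodeName string) error {
        node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
        fmt.Printf("node cpu capacity is %s\n", cpu.String())
        return nil
    }
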
	I0907 00:56:24.628454   47297 start.go:228] waiting for startup goroutines ...
	I0907 00:56:24.628460   47297 start.go:233] waiting for cluster config update ...
	I0907 00:56:24.628469   47297 start.go:242] writing updated cluster config ...
	I0907 00:56:24.628735   47297 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:24.683237   47297 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:24.686336   47297 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-773466" cluster and "default" namespace by default
	I0907 00:56:26.977381   46768 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503998 seconds
	I0907 00:56:26.977624   46768 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:56:27.000116   46768 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:56:27.541598   46768 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:56:27.541809   46768 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-321164 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0907 00:56:28.055045   46768 kubeadm.go:322] [bootstrap-token] Using token: 7x1950.9u417zcplp1q0xai
	I0907 00:56:24.247241   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:26.773163   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:28.056582   46768 out.go:204]   - Configuring RBAC rules ...
	I0907 00:56:28.056725   46768 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:56:28.065256   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0907 00:56:28.075804   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:56:28.081996   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 00:56:28.090825   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:56:28.097257   46768 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:56:28.114787   46768 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0907 00:56:28.337001   46768 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 00:56:28.476411   46768 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 00:56:28.479682   46768 kubeadm.go:322] 
	I0907 00:56:28.479784   46768 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 00:56:28.479799   46768 kubeadm.go:322] 
	I0907 00:56:28.479898   46768 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 00:56:28.479912   46768 kubeadm.go:322] 
	I0907 00:56:28.479943   46768 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 00:56:28.480046   46768 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:56:28.480143   46768 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:56:28.480163   46768 kubeadm.go:322] 
	I0907 00:56:28.480343   46768 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0907 00:56:28.480361   46768 kubeadm.go:322] 
	I0907 00:56:28.480431   46768 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0907 00:56:28.480450   46768 kubeadm.go:322] 
	I0907 00:56:28.480544   46768 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 00:56:28.480656   46768 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:56:28.480783   46768 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:56:28.480796   46768 kubeadm.go:322] 
	I0907 00:56:28.480924   46768 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0907 00:56:28.481024   46768 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 00:56:28.481034   46768 kubeadm.go:322] 
	I0907 00:56:28.481117   46768 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7x1950.9u417zcplp1q0xai \
	I0907 00:56:28.481203   46768 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 00:56:28.481223   46768 kubeadm.go:322] 	--control-plane 
	I0907 00:56:28.481226   46768 kubeadm.go:322] 
	I0907 00:56:28.481346   46768 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:56:28.481355   46768 kubeadm.go:322] 
	I0907 00:56:28.481453   46768 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7x1950.9u417zcplp1q0xai \
	I0907 00:56:28.481572   46768 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:56:28.482216   46768 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:56:28.482238   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:56:28.482248   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:56:28.484094   46768 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:56:28.485597   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:56:28.537400   46768 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:56:28.577654   46768 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:56:28.577734   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:28.577747   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=no-preload-321164 minikube.k8s.io/updated_at=2023_09_07T00_56_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:28.909178   46768 ops.go:34] apiserver oom_adj: -16
	I0907 00:56:28.920821   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.027812   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.627489   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:30.127554   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.246606   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:31.746291   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:30.627315   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:31.127759   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:31.627183   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:32.127488   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:32.627464   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.126850   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.626901   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:34.126917   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:34.626850   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:35.127788   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.747054   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:35.747536   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:35.627454   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:36.126916   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:36.626926   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:37.126845   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:37.627579   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:38.126885   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:38.627849   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:39.127371   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:39.627929   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.127775   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.627392   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.760535   46768 kubeadm.go:1081] duration metric: took 12.182860946s to wait for elevateKubeSystemPrivileges.
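
The repeated "kubectl get sa default" runs above are a poll: after kubeadm init finishes, minikube waits for the "default" ServiceAccount to exist before binding kube-system privileges. A minimal client-go sketch of that wait (waitForDefaultSA is a hypothetical helper, not minikube's implementation):

    // Illustrative sketch of polling for the default ServiceAccount.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForDefaultSA(client kubernetes.Interface, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            _, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            if err != nil {
                return false, nil // not created yet; keep polling
            }
            return true, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForDefaultSA(client, 2*time.Minute); err != nil {
            fmt.Println("default service account never appeared:", err)
        }
    }
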
	I0907 00:56:40.760574   46768 kubeadm.go:406] StartCluster complete in 5m29.209699324s
	I0907 00:56:40.760594   46768 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:56:40.760690   46768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:56:40.762820   46768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:56:40.763132   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:56:40.763152   46768 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:56:40.763245   46768 addons.go:69] Setting storage-provisioner=true in profile "no-preload-321164"
	I0907 00:56:40.763251   46768 addons.go:69] Setting default-storageclass=true in profile "no-preload-321164"
	I0907 00:56:40.763263   46768 addons.go:231] Setting addon storage-provisioner=true in "no-preload-321164"
	W0907 00:56:40.763271   46768 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:56:40.763272   46768 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-321164"
	I0907 00:56:40.763314   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.763357   46768 config.go:182] Loaded profile config "no-preload-321164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:56:40.763404   46768 addons.go:69] Setting metrics-server=true in profile "no-preload-321164"
	I0907 00:56:40.763421   46768 addons.go:231] Setting addon metrics-server=true in "no-preload-321164"
	W0907 00:56:40.763428   46768 addons.go:240] addon metrics-server should already be in state true
	I0907 00:56:40.763464   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.763718   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763747   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.763772   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763793   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.763811   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763833   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.781727   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I0907 00:56:40.781738   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0907 00:56:40.781741   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0907 00:56:40.782188   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782279   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782332   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782702   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782724   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.782856   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782873   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.782879   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782894   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.783096   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783306   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783354   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783531   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.783686   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.783717   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.783905   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.783949   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.801244   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0907 00:56:40.801534   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36269
	I0907 00:56:40.801961   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.802064   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.802509   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.802529   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.802673   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.802689   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.802942   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.803153   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.803218   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.803365   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.804775   46768 addons.go:231] Setting addon default-storageclass=true in "no-preload-321164"
	W0907 00:56:40.804798   46768 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:56:40.804828   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.805191   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.805490   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.807809   46768 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:56:40.806890   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.809154   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.809188   46768 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:56:40.809199   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:56:40.809215   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.809249   46768 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:56:40.810543   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:56:40.810557   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:56:40.810570   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.809485   46768 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-321164" context rescaled to 1 replicas
	I0907 00:56:40.810637   46768 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:56:40.813528   46768 out.go:177] * Verifying Kubernetes components...
	I0907 00:56:38.246743   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:40.747015   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:40.814976   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:40.817948   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.818029   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.818080   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818100   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.818117   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818137   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818156   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.818175   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818212   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.818282   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.818348   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.818462   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.818472   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.818676   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.827224   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I0907 00:56:40.827578   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.828106   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.828122   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.828464   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.829012   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.829043   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.843423   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41287
	I0907 00:56:40.843768   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.844218   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.844236   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.844529   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.844735   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.846265   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.846489   46768 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:56:40.846506   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:56:40.846525   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.849325   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.849666   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.849704   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.849897   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.850103   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.850251   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.850397   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.965966   46768 node_ready.go:35] waiting up to 6m0s for node "no-preload-321164" to be "Ready" ...
	I0907 00:56:40.966030   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0907 00:56:40.997127   46768 node_ready.go:49] node "no-preload-321164" has status "Ready":"True"
	I0907 00:56:40.997149   46768 node_ready.go:38] duration metric: took 31.151467ms waiting for node "no-preload-321164" to be "Ready" ...
	I0907 00:56:40.997158   46768 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:41.010753   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:56:41.011536   46768 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:41.022410   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:56:41.022431   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:56:41.051599   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:56:41.119566   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:56:41.119594   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:56:41.228422   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:56:41.228443   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:56:41.321420   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:56:42.776406   46768 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.810334575s)
	I0907 00:56:42.776435   46768 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
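
The sed pipeline logged at 00:56:40.966030 rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.61.1 here) and enables query logging. Reconstructed from the sed expressions above (other default Corefile plugins are left unchanged and omitted), the affected part of the Corefile ends up reading roughly:

    .:53 {
        log
        errors
        [other default plugins unchanged]
        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        [remaining defaults unchanged]
    }
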
	I0907 00:56:43.385184   46768 pod_ready.go:102] pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:43.446190   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.435398332s)
	I0907 00:56:43.446240   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.446248   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.3946112s)
	I0907 00:56:43.446255   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.449355   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.449362   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.449377   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.449389   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.449406   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.449732   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.449771   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.449787   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.450189   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.450216   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.450653   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.450672   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.450682   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.450691   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.451532   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.451597   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.451619   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.451635   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.451648   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.451869   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.451885   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.451895   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.689511   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.368045812s)
	I0907 00:56:43.689565   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.689579   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.689952   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.689963   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.689974   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.689991   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.690001   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.690291   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.690307   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.690309   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.690322   46768 addons.go:467] Verifying addon metrics-server=true in "no-preload-321164"
	I0907 00:56:43.693105   46768 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:56:43.694562   46768 addons.go:502] enable addons completed in 2.931409197s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0907 00:56:45.310723   46768 pod_ready.go:92] pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.310742   46768 pod_ready.go:81] duration metric: took 4.299181671s waiting for pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.310753   46768 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.316350   46768 pod_ready.go:92] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.316373   46768 pod_ready.go:81] duration metric: took 5.614264ms waiting for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.316385   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.321183   46768 pod_ready.go:92] pod "kube-apiserver-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.321205   46768 pod_ready.go:81] duration metric: took 4.811919ms waiting for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.321216   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.326279   46768 pod_ready.go:92] pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.326297   46768 pod_ready.go:81] duration metric: took 5.0741ms waiting for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.326308   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-st6n8" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.332665   46768 pod_ready.go:92] pod "kube-proxy-st6n8" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.332687   46768 pod_ready.go:81] duration metric: took 6.372253ms waiting for pod "kube-proxy-st6n8" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.332697   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.708023   46768 pod_ready.go:92] pod "kube-scheduler-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.708044   46768 pod_ready.go:81] duration metric: took 375.339873ms waiting for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.708051   46768 pod_ready.go:38] duration metric: took 4.710884592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
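
The pod_ready.go lines above report whether each system pod's "Ready" condition is True. In client-go terms the check reduces to reading the PodReady condition from the pod status; a hypothetical podIsReady helper (a sketch, not minikube's code) would look like:

    // Illustrative sketch of the readiness check reported by pod_ready.go above.
    package podsketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }
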
	I0907 00:56:45.708065   46768 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:56:45.708106   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:56:45.725929   46768 api_server.go:72] duration metric: took 4.915250734s to wait for apiserver process to appear ...
	I0907 00:56:45.725950   46768 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:56:45.725964   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:56:45.731998   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 200:
	ok
	I0907 00:56:45.733492   46768 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:45.733507   46768 api_server.go:131] duration metric: took 7.552661ms to wait for apiserver health ...
	I0907 00:56:45.733514   46768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:45.911337   46768 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:45.911374   46768 system_pods.go:61] "coredns-5dd5756b68-8tnp7" [1d896961-1b2c-48fd-b9dd-a40a95174fed] Running
	I0907 00:56:45.911383   46768 system_pods.go:61] "etcd-no-preload-321164" [84b8dd41-f676-48e0-b231-c27178cc0345] Running
	I0907 00:56:45.911389   46768 system_pods.go:61] "kube-apiserver-no-preload-321164" [a5a3cde8-128a-411d-9970-d3811ba22c5c] Running
	I0907 00:56:45.911397   46768 system_pods.go:61] "kube-controller-manager-no-preload-321164" [81614893-1ef1-4246-84ad-d4a2d9dedff8] Running
	I0907 00:56:45.911403   46768 system_pods.go:61] "kube-proxy-st6n8" [8f3aa3f2-223b-43de-b0e9-987958c50108] Running
	I0907 00:56:45.911410   46768 system_pods.go:61] "kube-scheduler-no-preload-321164" [7a45c187-7365-4144-ae68-ba42b1069afd] Running
	I0907 00:56:45.911421   46768 system_pods.go:61] "metrics-server-57f55c9bc5-vgngs" [9036423c-c4f7-4beb-92da-e106b8af306c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:45.911435   46768 system_pods.go:61] "storage-provisioner" [58bbe692-61d0-466d-b6bf-28af2faf4ec9] Running
	I0907 00:56:45.911443   46768 system_pods.go:74] duration metric: took 177.923008ms to wait for pod list to return data ...
	I0907 00:56:45.911455   46768 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:46.107121   46768 default_sa.go:45] found service account: "default"
	I0907 00:56:46.107149   46768 default_sa.go:55] duration metric: took 195.685496ms for default service account to be created ...
	I0907 00:56:46.107159   46768 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:46.314551   46768 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:46.314588   46768 system_pods.go:89] "coredns-5dd5756b68-8tnp7" [1d896961-1b2c-48fd-b9dd-a40a95174fed] Running
	I0907 00:56:46.314596   46768 system_pods.go:89] "etcd-no-preload-321164" [84b8dd41-f676-48e0-b231-c27178cc0345] Running
	I0907 00:56:46.314603   46768 system_pods.go:89] "kube-apiserver-no-preload-321164" [a5a3cde8-128a-411d-9970-d3811ba22c5c] Running
	I0907 00:56:46.314611   46768 system_pods.go:89] "kube-controller-manager-no-preload-321164" [81614893-1ef1-4246-84ad-d4a2d9dedff8] Running
	I0907 00:56:46.314618   46768 system_pods.go:89] "kube-proxy-st6n8" [8f3aa3f2-223b-43de-b0e9-987958c50108] Running
	I0907 00:56:46.314624   46768 system_pods.go:89] "kube-scheduler-no-preload-321164" [7a45c187-7365-4144-ae68-ba42b1069afd] Running
	I0907 00:56:46.314634   46768 system_pods.go:89] "metrics-server-57f55c9bc5-vgngs" [9036423c-c4f7-4beb-92da-e106b8af306c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:46.314645   46768 system_pods.go:89] "storage-provisioner" [58bbe692-61d0-466d-b6bf-28af2faf4ec9] Running
	I0907 00:56:46.314653   46768 system_pods.go:126] duration metric: took 207.48874ms to wait for k8s-apps to be running ...
	I0907 00:56:46.314663   46768 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:46.314713   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:46.331286   46768 system_svc.go:56] duration metric: took 16.613382ms WaitForService to wait for kubelet.
	I0907 00:56:46.331316   46768 kubeadm.go:581] duration metric: took 5.520640777s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:46.331342   46768 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:46.507374   46768 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:46.507398   46768 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:46.507406   46768 node_conditions.go:105] duration metric: took 176.059527ms to run NodePressure ...
	I0907 00:56:46.507417   46768 start.go:228] waiting for startup goroutines ...
	I0907 00:56:46.507422   46768 start.go:233] waiting for cluster config update ...
	I0907 00:56:46.507433   46768 start.go:242] writing updated cluster config ...
	I0907 00:56:46.507728   46768 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:46.559712   46768 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:46.561693   46768 out.go:177] * Done! kubectl is now configured to use "no-preload-321164" cluster and "default" namespace by default
	I0907 00:56:43.245531   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:45.746168   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:48.247228   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:50.746605   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:52.748264   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:55.246186   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:57.746658   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:00.245358   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:02.246373   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:04.746154   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:07.245583   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:09.246215   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:11.247141   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:13.247249   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:13.440321   46354 pod_ready.go:81] duration metric: took 4m0.000811237s waiting for pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace to be "Ready" ...
	E0907 00:57:13.440352   46354 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:57:13.440368   46354 pod_ready.go:38] duration metric: took 4m1.198343499s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:57:13.440395   46354 kubeadm.go:640] restartCluster took 5m7.071390852s
	W0907 00:57:13.440463   46354 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0907 00:57:13.440538   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0907 00:57:26.505313   46354 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.064737983s)
	I0907 00:57:26.505392   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:57:26.521194   46354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:57:26.530743   46354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:57:26.540431   46354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:57:26.540473   46354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0907 00:57:26.744360   46354 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:57:39.131760   46354 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0907 00:57:39.131857   46354 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 00:57:39.131964   46354 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:57:39.132110   46354 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:57:39.132226   46354 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0907 00:57:39.132360   46354 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:57:39.132501   46354 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:57:39.132573   46354 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0907 00:57:39.132654   46354 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:57:39.134121   46354 out.go:204]   - Generating certificates and keys ...
	I0907 00:57:39.134212   46354 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 00:57:39.134313   46354 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 00:57:39.134422   46354 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0907 00:57:39.134501   46354 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0907 00:57:39.134605   46354 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0907 00:57:39.134688   46354 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0907 00:57:39.134801   46354 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0907 00:57:39.134902   46354 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0907 00:57:39.135010   46354 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0907 00:57:39.135121   46354 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0907 00:57:39.135169   46354 kubeadm.go:322] [certs] Using the existing "sa" key
	I0907 00:57:39.135241   46354 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:57:39.135308   46354 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:57:39.135393   46354 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:57:39.135512   46354 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:57:39.135599   46354 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:57:39.135700   46354 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:57:39.137273   46354 out.go:204]   - Booting up control plane ...
	I0907 00:57:39.137369   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:57:39.137458   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:57:39.137561   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:57:39.137677   46354 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:57:39.137888   46354 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 00:57:39.138013   46354 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503675 seconds
	I0907 00:57:39.138137   46354 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:57:39.138249   46354 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:57:39.138297   46354 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:57:39.138402   46354 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-940806 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0907 00:57:39.138453   46354 kubeadm.go:322] [bootstrap-token] Using token: nfcsq1.o4ef3s2bthacz2l0
	I0907 00:57:39.139754   46354 out.go:204]   - Configuring RBAC rules ...
	I0907 00:57:39.139848   46354 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:57:39.139970   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:57:39.140112   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0907 00:57:39.140245   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:57:39.140327   46354 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:57:39.140393   46354 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 00:57:39.140442   46354 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 00:57:39.140452   46354 kubeadm.go:322] 
	I0907 00:57:39.140525   46354 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 00:57:39.140533   46354 kubeadm.go:322] 
	I0907 00:57:39.140628   46354 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 00:57:39.140635   46354 kubeadm.go:322] 
	I0907 00:57:39.140665   46354 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 00:57:39.140752   46354 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:57:39.140822   46354 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:57:39.140834   46354 kubeadm.go:322] 
	I0907 00:57:39.140896   46354 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 00:57:39.140960   46354 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:57:39.141043   46354 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:57:39.141051   46354 kubeadm.go:322] 
	I0907 00:57:39.141159   46354 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0907 00:57:39.141262   46354 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 00:57:39.141276   46354 kubeadm.go:322] 
	I0907 00:57:39.141407   46354 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nfcsq1.o4ef3s2bthacz2l0 \
	I0907 00:57:39.141536   46354 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 00:57:39.141568   46354 kubeadm.go:322]     --control-plane 	  
	I0907 00:57:39.141575   46354 kubeadm.go:322] 
	I0907 00:57:39.141657   46354 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:57:39.141665   46354 kubeadm.go:322] 
	I0907 00:57:39.141730   46354 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nfcsq1.o4ef3s2bthacz2l0 \
	I0907 00:57:39.141832   46354 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:57:39.141851   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:57:39.141863   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:57:39.143462   46354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:57:39.144982   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:57:39.158663   46354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:57:39.180662   46354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:57:39.180747   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.180749   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=old-k8s-version-940806 minikube.k8s.io/updated_at=2023_09_07T00_57_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.208969   46354 ops.go:34] apiserver oom_adj: -16
	I0907 00:57:39.426346   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.545090   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:40.162127   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:40.662172   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:41.162069   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:41.662164   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:42.162355   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:42.662152   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:43.161862   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:43.661532   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:44.162130   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:44.661948   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:45.162260   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:45.662082   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:46.162345   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:46.662378   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:47.162307   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:47.662556   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:48.162204   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:48.661938   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:49.161608   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:49.662198   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:50.162016   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:50.662392   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:51.162303   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:51.662393   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:52.162510   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:52.662195   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:53.162302   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:53.662427   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.162085   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.662218   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.779895   46354 kubeadm.go:1081] duration metric: took 15.599222217s to wait for elevateKubeSystemPrivileges.
	I0907 00:57:54.779927   46354 kubeadm.go:406] StartCluster complete in 5m48.456500898s
	I0907 00:57:54.779949   46354 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:57:54.780038   46354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:57:54.782334   46354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:57:54.782624   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:57:54.782772   46354 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:57:54.782871   46354 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-940806"
	I0907 00:57:54.782890   46354 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-940806"
	I0907 00:57:54.782900   46354 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-940806"
	W0907 00:57:54.782908   46354 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:57:54.782918   46354 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-940806"
	W0907 00:57:54.782926   46354 addons.go:240] addon metrics-server should already be in state true
	I0907 00:57:54.782880   46354 config.go:182] Loaded profile config "old-k8s-version-940806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:57:54.782889   46354 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-940806"
	I0907 00:57:54.783049   46354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-940806"
	I0907 00:57:54.782963   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.782963   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.783499   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783500   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783528   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.783533   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.783571   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783599   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.802026   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44005
	I0907 00:57:54.802487   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.803108   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.803131   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.803164   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
	I0907 00:57:54.803164   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I0907 00:57:54.803512   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.803674   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.803710   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.804184   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.804215   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.804239   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.804259   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.804311   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.804327   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.804569   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.804668   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.804832   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.805067   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.805094   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.821660   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39335
	I0907 00:57:54.822183   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.822694   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.822720   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.823047   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.823247   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.823707   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I0907 00:57:54.824135   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.825021   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.825046   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.825082   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.827174   46354 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:57:54.825428   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.828768   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:57:54.828787   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:57:54.828808   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.829357   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.831479   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.833553   46354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:57:54.832288   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.832776   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.834996   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.835038   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.835055   46354 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:57:54.835067   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:57:54.835083   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.835140   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.835307   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.835410   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.836403   46354 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-940806"
	W0907 00:57:54.836424   46354 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:57:54.836451   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.836822   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.836851   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.838476   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.838920   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.838951   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.839218   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.839540   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.839719   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.839896   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.854883   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0907 00:57:54.855311   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.855830   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.855858   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.856244   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.856713   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.856737   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.872940   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39937
	I0907 00:57:54.873442   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.874030   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.874057   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.874433   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.874665   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.876568   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.876928   46354 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:57:54.876947   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:57:54.876966   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.879761   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.879993   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.880015   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.880248   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.880424   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.880591   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.880694   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.933915   46354 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-940806" context rescaled to 1 replicas
	I0907 00:57:54.933965   46354 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:57:54.936214   46354 out.go:177] * Verifying Kubernetes components...
	I0907 00:57:54.937844   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:57:55.011087   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:57:55.011114   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:57:55.020666   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:57:55.038411   46354 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-940806" to be "Ready" ...
	I0907 00:57:55.038474   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0907 00:57:55.066358   46354 node_ready.go:49] node "old-k8s-version-940806" has status "Ready":"True"
	I0907 00:57:55.066382   46354 node_ready.go:38] duration metric: took 27.94281ms waiting for node "old-k8s-version-940806" to be "Ready" ...
	I0907 00:57:55.066393   46354 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:57:55.076936   46354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace to be "Ready" ...
	I0907 00:57:55.118806   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:57:55.118835   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:57:55.145653   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:57:55.158613   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:57:55.158636   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:57:55.214719   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:57:56.905329   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.884630053s)
	I0907 00:57:56.905379   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905377   46354 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.866875113s)
	I0907 00:57:56.905392   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905403   46354 start.go:901] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I0907 00:57:56.905417   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.759735751s)
	I0907 00:57:56.905441   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905455   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905794   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.905842   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.905858   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.905878   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.905895   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905910   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905963   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906013   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906037   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.906047   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.906286   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906340   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906293   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906325   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906436   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906449   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.906459   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.906630   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906729   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906732   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906749   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.087889   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.873113752s)
	I0907 00:57:57.087946   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:57.087979   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:57.088366   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:57.089849   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:57.089880   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.089892   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:57.089899   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:57.090126   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:57.090146   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.090155   46354 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-940806"
	I0907 00:57:57.093060   46354 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:57:57.094326   46354 addons.go:502] enable addons completed in 2.311555161s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0907 00:57:57.115594   46354 pod_ready.go:102] pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:59.609005   46354 pod_ready.go:102] pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace has status "Ready":"False"
	I0907 00:58:00.605260   46354 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rf6lv" not found
	I0907 00:58:00.605285   46354 pod_ready.go:81] duration metric: took 5.528319392s waiting for pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace to be "Ready" ...
	E0907 00:58:00.605296   46354 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rf6lv" not found
	I0907 00:58:00.605305   46354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.623994   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace has status "Ready":"True"
	I0907 00:58:02.624020   46354 pod_ready.go:81] duration metric: took 2.01870868s waiting for pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.624039   46354 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bt454" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.629264   46354 pod_ready.go:92] pod "kube-proxy-bt454" in "kube-system" namespace has status "Ready":"True"
	I0907 00:58:02.629282   46354 pod_ready.go:81] duration metric: took 5.236562ms waiting for pod "kube-proxy-bt454" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.629288   46354 pod_ready.go:38] duration metric: took 7.562884581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:58:02.629301   46354 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:58:02.629339   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:58:02.644494   46354 api_server.go:72] duration metric: took 7.710498225s to wait for apiserver process to appear ...
	I0907 00:58:02.644515   46354 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:58:02.644529   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:58:02.651352   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 200:
	ok
	I0907 00:58:02.652147   46354 api_server.go:141] control plane version: v1.16.0
	I0907 00:58:02.652186   46354 api_server.go:131] duration metric: took 7.646808ms to wait for apiserver health ...
	I0907 00:58:02.652199   46354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:58:02.656482   46354 system_pods.go:59] 4 kube-system pods found
	I0907 00:58:02.656506   46354 system_pods.go:61] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.656513   46354 system_pods.go:61] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.656524   46354 system_pods.go:61] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.656534   46354 system_pods.go:61] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.656541   46354 system_pods.go:74] duration metric: took 4.333279ms to wait for pod list to return data ...
	I0907 00:58:02.656553   46354 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:58:02.659079   46354 default_sa.go:45] found service account: "default"
	I0907 00:58:02.659102   46354 default_sa.go:55] duration metric: took 2.543265ms for default service account to be created ...
	I0907 00:58:02.659110   46354 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:58:02.663028   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:02.663050   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.663058   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.663069   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.663077   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.663094   46354 retry.go:31] will retry after 205.506153ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:02.874261   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:02.874291   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.874299   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.874309   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.874318   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.874335   46354 retry.go:31] will retry after 265.617543ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:03.145704   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:03.145736   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:03.145745   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:03.145755   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:03.145764   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:03.145782   46354 retry.go:31] will retry after 459.115577ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:03.610425   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:03.610458   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:03.610466   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:03.610474   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:03.610482   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:03.610498   46354 retry.go:31] will retry after 411.97961ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:04.026961   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:04.026992   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:04.026997   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:04.027004   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:04.027011   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:04.027024   46354 retry.go:31] will retry after 633.680519ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:04.665840   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:04.665868   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:04.665877   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:04.665889   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:04.665899   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:04.665916   46354 retry.go:31] will retry after 680.962565ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:05.352621   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:05.352644   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:05.352652   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:05.352699   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:05.352710   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:05.352725   46354 retry.go:31] will retry after 939.996523ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:06.298740   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:06.298765   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:06.298770   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:06.298791   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:06.298803   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:06.298820   46354 retry.go:31] will retry after 1.103299964s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:07.407728   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:07.407753   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:07.407758   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:07.407766   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:07.407772   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:07.407785   46354 retry.go:31] will retry after 1.13694803s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:08.550198   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:08.550228   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:08.550236   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:08.550245   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:08.550252   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:08.550269   46354 retry.go:31] will retry after 2.240430665s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:10.796203   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:10.796228   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:10.796233   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:10.796240   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:10.796246   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:10.796261   46354 retry.go:31] will retry after 2.183105097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:12.985467   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:12.985491   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:12.985500   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:12.985510   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:12.985518   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:12.985535   46354 retry.go:31] will retry after 2.428546683s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:15.419138   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:15.419163   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:15.419168   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:15.419174   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:15.419181   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:15.419195   46354 retry.go:31] will retry after 2.778392129s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:18.202590   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:18.202621   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:18.202629   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:18.202639   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:18.202648   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:18.202670   46354 retry.go:31] will retry after 5.204092587s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:23.412120   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:23.412144   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:23.412157   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:23.412164   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:23.412171   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:23.412187   46354 retry.go:31] will retry after 6.095121382s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:29.513424   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:29.513449   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:29.513454   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:29.513462   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:29.513468   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:29.513482   46354 retry.go:31] will retry after 6.142679131s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:35.662341   46354 system_pods.go:86] 5 kube-system pods found
	I0907 00:58:35.662367   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:35.662372   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:35.662377   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Pending
	I0907 00:58:35.662383   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:35.662390   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:35.662408   46354 retry.go:31] will retry after 10.800349656s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:46.468817   46354 system_pods.go:86] 6 kube-system pods found
	I0907 00:58:46.468845   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:46.468854   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:58:46.468859   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:46.468867   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:58:46.468876   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:46.468884   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:46.468901   46354 retry.go:31] will retry after 10.570531489s: missing components: kube-apiserver, kube-controller-manager
	I0907 00:58:57.047784   46354 system_pods.go:86] 8 kube-system pods found
	I0907 00:58:57.047865   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:57.047892   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:58:57.048256   46354 system_pods.go:89] "kube-apiserver-old-k8s-version-940806" [6a513b1a-cad2-4136-a7b0-a86df04f6c09] Pending
	I0907 00:58:57.048272   46354 system_pods.go:89] "kube-controller-manager-old-k8s-version-940806" [5ff6ffdb-1b2c-4498-84ad-e2811a8dd16a] Pending
	I0907 00:58:57.048279   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:57.048286   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:58:57.048301   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:57.048315   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:57.048345   46354 retry.go:31] will retry after 14.06926028s: missing components: kube-apiserver, kube-controller-manager
	I0907 00:59:11.124216   46354 system_pods.go:86] 8 kube-system pods found
	I0907 00:59:11.124242   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:59:11.124248   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:59:11.124252   46354 system_pods.go:89] "kube-apiserver-old-k8s-version-940806" [6a513b1a-cad2-4136-a7b0-a86df04f6c09] Running
	I0907 00:59:11.124257   46354 system_pods.go:89] "kube-controller-manager-old-k8s-version-940806" [5ff6ffdb-1b2c-4498-84ad-e2811a8dd16a] Running
	I0907 00:59:11.124261   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:59:11.124265   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:59:11.124272   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:59:11.124276   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:59:11.124283   46354 system_pods.go:126] duration metric: took 1m8.465167722s to wait for k8s-apps to be running ...
	I0907 00:59:11.124289   46354 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:59:11.124328   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:59:11.140651   46354 system_svc.go:56] duration metric: took 16.348641ms WaitForService to wait for kubelet.
	I0907 00:59:11.140686   46354 kubeadm.go:581] duration metric: took 1m16.206690472s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:59:11.140714   46354 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:59:11.144185   46354 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:59:11.144212   46354 node_conditions.go:123] node cpu capacity is 2
	I0907 00:59:11.144224   46354 node_conditions.go:105] duration metric: took 3.50462ms to run NodePressure ...
	I0907 00:59:11.144235   46354 start.go:228] waiting for startup goroutines ...
	I0907 00:59:11.144244   46354 start.go:233] waiting for cluster config update ...
	I0907 00:59:11.144259   46354 start.go:242] writing updated cluster config ...
	I0907 00:59:11.144547   46354 ssh_runner.go:195] Run: rm -f paused
	I0907 00:59:11.194224   46354 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0907 00:59:11.196420   46354 out.go:177] 
	W0907 00:59:11.197939   46354 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0907 00:59:11.199287   46354 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0907 00:59:11.200770   46354 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-940806" cluster and "default" namespace by default
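	The system_pods.go / retry.go entries above show minikube polling kube-system until etcd, kube-apiserver, kube-controller-manager and kube-scheduler report Running, retrying with a growing delay before declaring the apps healthy. A minimal client-go sketch of that kind of wait loop is shown below, assuming a hypothetical kubeconfig path and component list; it is an illustration of the pattern, not minikube's actual implementation.

	// Illustrative sketch: poll kube-system pods until the expected control-plane
	// components are Running, retrying with a growing delay. The kubeconfig path
	// and the "expected" list are assumptions for the example.
	package main

	import (
		"context"
		"fmt"
		"strings"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; minikube maintains its own per-profile config.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		expected := []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
		delay := 2 * time.Second

		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err != nil {
				panic(err)
			}
			// Mark an expected component as present once a pod with that name prefix is Running.
			running := map[string]bool{}
			for _, p := range pods.Items {
				if p.Status.Phase != "Running" {
					continue
				}
				for _, want := range expected {
					if strings.HasPrefix(p.Name, want) {
						running[want] = true
					}
				}
			}
			var missing []string
			for _, want := range expected {
				if !running[want] {
					missing = append(missing, want)
				}
			}
			if len(missing) == 0 {
				fmt.Println("all expected kube-system components are running")
				return
			}
			fmt.Printf("will retry after %s: missing components: %s\n", delay, strings.Join(missing, ", "))
			time.Sleep(delay)
			delay += delay / 2 // back off, loosely mirroring the growing retry intervals in the log
		}
	}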
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-07 00:51:03 UTC, ends at Thu 2023-09-07 01:05:07 UTC. --
	Sep 07 01:05:06 embed-certs-546209 crio[728]: time="2023-09-07 01:05:06.617799048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3bfa8314-ad39-478a-b07a-76aa6a7e1189 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.113227359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a39c620a-85e7-4005-8d12-0c7153ea3a72 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.113350231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a39c620a-85e7-4005-8d12-0c7153ea3a72 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.113754406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a39c620a-85e7-4005-8d12-0c7153ea3a72 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.154177950Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9649605e-8a91-4dcf-8baa-5eece8d476a2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.154267546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9649605e-8a91-4dcf-8baa-5eece8d476a2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.154551108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9649605e-8a91-4dcf-8baa-5eece8d476a2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.191843406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2e8ff49c-303e-4a22-bc4e-4e4ebc9fb3f1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.191934719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2e8ff49c-303e-4a22-bc4e-4e4ebc9fb3f1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.192161562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2e8ff49c-303e-4a22-bc4e-4e4ebc9fb3f1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.234124031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=08954f6a-7725-4b4f-a2e3-c7d3a566b084 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.234220629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=08954f6a-7725-4b4f-a2e3-c7d3a566b084 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.234424211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=08954f6a-7725-4b4f-a2e3-c7d3a566b084 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.273796527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=92a979b9-9283-40a1-8847-20aaf589a4ce name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.273887191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=92a979b9-9283-40a1-8847-20aaf589a4ce name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.274138240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=92a979b9-9283-40a1-8847-20aaf589a4ce name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.316299954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=375b6b79-1bc3-4945-923d-b369edf66c7b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.316385551Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=375b6b79-1bc3-4945-923d-b369edf66c7b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.316600690Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=375b6b79-1bc3-4945-923d-b369edf66c7b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.355180254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e178487e-fe2c-42f3-92d6-4de3c7a6c1bf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.355270724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e178487e-fe2c-42f3-92d6-4de3c7a6c1bf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.355514280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e178487e-fe2c-42f3-92d6-4de3c7a6c1bf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.393284062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=296d212c-6564-47f9-a1e4-b8cb879017cc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.393372217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=296d212c-6564-47f9-a1e4-b8cb879017cc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:07 embed-certs-546209 crio[728]: time="2023-09-07 01:05:07.393726759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=296d212c-6564-47f9-a1e4-b8cb879017cc name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	3e19fc62694d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   e41590ec7641e
	db7e689ee42b1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   7548386602c35
	855a29ec437be       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   e59f871b4b994
	9094ebc4a03d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   e41590ec7641e
	6af4cd8e3e587       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      13 minutes ago      Running             kube-proxy                1                   8f9b0f503434d
	9177fe24226fe       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      13 minutes ago      Running             kube-scheduler            1                   1c914348c6421
	3fee1540272d1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   1a8caaf07d65b
	22bdcb2b7b02d       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      13 minutes ago      Running             kube-controller-manager   1                   f6811bf1cfb84
	3bfeea0ca797b       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      13 minutes ago      Running             kube-apiserver            1                   ec88ecc5dab6c
	
	* 
	* ==> coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60886 - 8531 "HINFO IN 2783813726071619599.7588099067166792090. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015187174s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-546209
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-546209
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=embed-certs-546209
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_07T00_43_39_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:43:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-546209
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Sep 2023 01:05:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 01:02:20 +0000   Thu, 07 Sep 2023 00:43:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 01:02:20 +0000   Thu, 07 Sep 2023 00:43:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 01:02:20 +0000   Thu, 07 Sep 2023 00:43:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 01:02:20 +0000   Thu, 07 Sep 2023 00:51:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.242
	  Hostname:    embed-certs-546209
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 63417a3b59c148f19ad0029f51d9917d
	  System UUID:                63417a3b-59c1-48f1-9ad0-029f51d9917d
	  Boot ID:                    fea4dfc2-0ceb-4ce4-9108-1c291a715af7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-vrgm9                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-546209                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-546209             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-546209    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-47255                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-546209             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-d7nxw               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-546209 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-546209 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-546209 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-546209 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-546209 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-546209 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node embed-certs-546209 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-546209 event: Registered Node embed-certs-546209 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-546209 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-546209 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-546209 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-546209 event: Registered Node embed-certs-546209 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep 7 00:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076652] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.416100] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep 7 00:51] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153289] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.562531] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.754231] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.103119] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.163550] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.124921] systemd-fstab-generator[689]: Ignoring "noauto" for root device
	[  +0.237307] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[ +17.334121] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[ +15.381252] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] <==
	* {"level":"warn","ts":"2023-09-07T00:51:41.83714Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.159419088s","expected-duration":"1s"}
	{"level":"info","ts":"2023-09-07T00:51:41.838373Z","caller":"traceutil/trace.go:171","msg":"trace[1470024363] linearizableReadLoop","detail":"{readStateIndex:567; appliedIndex:566; }","duration":"1.042601225s","start":"2023-09-07T00:51:40.795756Z","end":"2023-09-07T00:51:41.838357Z","steps":["trace[1470024363] 'read index received'  (duration: 1.042429701s)","trace[1470024363] 'applied index is now lower than readState.Index'  (duration: 170.697µs)"],"step_count":2}
	{"level":"info","ts":"2023-09-07T00:51:41.838737Z","caller":"traceutil/trace.go:171","msg":"trace[201554928] transaction","detail":"{read_only:false; response_revision:537; number_of_response:1; }","duration":"1.161148943s","start":"2023-09-07T00:51:40.677568Z","end":"2023-09-07T00:51:41.838717Z","steps":["trace[201554928] 'process raft request'  (duration: 1.160680071s)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T00:51:41.843524Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-07T00:51:40.677554Z","time spent":"1.165919512s","remote":"127.0.0.1:59492","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3548,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:500 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3494 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"warn","ts":"2023-09-07T00:51:41.843214Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.018314774s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2023-09-07T00:51:41.844126Z","caller":"traceutil/trace.go:171","msg":"trace[422626843] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:1; response_revision:537; }","duration":"1.019241601s","start":"2023-09-07T00:51:40.824872Z","end":"2023-09-07T00:51:41.844114Z","steps":["trace[422626843] 'agreement among raft nodes before linearized reading'  (duration: 1.018273297s)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T00:51:41.844214Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-07T00:51:40.824856Z","time spent":"1.019343255s","remote":"127.0.0.1:59534","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":1,"response size":1015,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" "}
	{"level":"warn","ts":"2023-09-07T00:51:41.843282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.047550719s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3975"}
	{"level":"info","ts":"2023-09-07T00:51:41.844581Z","caller":"traceutil/trace.go:171","msg":"trace[1213076851] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:537; }","duration":"1.048839506s","start":"2023-09-07T00:51:40.795724Z","end":"2023-09-07T00:51:41.844563Z","steps":["trace[1213076851] 'agreement among raft nodes before linearized reading'  (duration: 1.047520609s)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T00:51:41.843332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"660.124257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:764"}
	{"level":"warn","ts":"2023-09-07T00:51:41.843363Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"666.589713ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-5dd5756b68-vrgm9.1782779663e402fe\" ","response":"range_response_count:1 size:812"}
	{"level":"warn","ts":"2023-09-07T00:51:41.846799Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-07T00:51:40.795706Z","time spent":"1.051078379s","remote":"127.0.0.1:59552","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":3998,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"info","ts":"2023-09-07T00:51:41.846928Z","caller":"traceutil/trace.go:171","msg":"trace[1270440029] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:537; }","duration":"663.716597ms","start":"2023-09-07T00:51:41.1832Z","end":"2023-09-07T00:51:41.846916Z","steps":["trace[1270440029] 'agreement among raft nodes before linearized reading'  (duration: 660.097301ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T00:51:41.84729Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-07T00:51:41.183188Z","time spent":"664.090795ms","remote":"127.0.0.1:59472","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":787,"request content":"key:\"/registry/configmaps/kube-system/coredns\" "}
	{"level":"info","ts":"2023-09-07T00:51:41.846985Z","caller":"traceutil/trace.go:171","msg":"trace[1851538186] range","detail":"{range_begin:/registry/events/kube-system/coredns-5dd5756b68-vrgm9.1782779663e402fe; range_end:; response_count:1; response_revision:537; }","duration":"670.208865ms","start":"2023-09-07T00:51:41.176768Z","end":"2023-09-07T00:51:41.846977Z","steps":["trace[1851538186] 'agreement among raft nodes before linearized reading'  (duration: 666.574311ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T00:51:41.847787Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-07T00:51:41.176754Z","time spent":"671.022022ms","remote":"127.0.0.1:59464","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":835,"request content":"key:\"/registry/events/kube-system/coredns-5dd5756b68-vrgm9.1782779663e402fe\" "}
	{"level":"info","ts":"2023-09-07T00:51:42.120855Z","caller":"traceutil/trace.go:171","msg":"trace[1393581746] linearizableReadLoop","detail":"{readStateIndex:568; appliedIndex:567; }","duration":"235.931158ms","start":"2023-09-07T00:51:41.884905Z","end":"2023-09-07T00:51:42.120836Z","steps":["trace[1393581746] 'read index received'  (duration: 231.566045ms)","trace[1393581746] 'applied index is now lower than readState.Index'  (duration: 4.364526ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-07T00:51:42.121582Z","caller":"traceutil/trace.go:171","msg":"trace[1535644854] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"253.403273ms","start":"2023-09-07T00:51:41.868157Z","end":"2023-09-07T00:51:42.12156Z","steps":["trace[1535644854] 'process raft request'  (duration: 248.374333ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T00:51:42.123243Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.339912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2023-09-07T00:51:42.123351Z","caller":"traceutil/trace.go:171","msg":"trace[527385643] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:538; }","duration":"238.46326ms","start":"2023-09-07T00:51:41.884876Z","end":"2023-09-07T00:51:42.123339Z","steps":["trace[527385643] 'agreement among raft nodes before linearized reading'  (duration: 237.2016ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T00:52:04.129051Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.537985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-d7nxw\" ","response":"range_response_count:1 size:4027"}
	{"level":"info","ts":"2023-09-07T00:52:04.129434Z","caller":"traceutil/trace.go:171","msg":"trace[1159191600] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-d7nxw; range_end:; response_count:1; response_revision:582; }","duration":"279.992473ms","start":"2023-09-07T00:52:03.849419Z","end":"2023-09-07T00:52:04.129412Z","steps":["trace[1159191600] 'range keys from in-memory index tree'  (duration: 279.39993ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-07T01:01:35.003125Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":827}
	{"level":"info","ts":"2023-09-07T01:01:35.005604Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":827,"took":"2.170911ms","hash":3721369683}
	{"level":"info","ts":"2023-09-07T01:01:35.005727Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3721369683,"revision":827,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  01:05:07 up 14 min,  0 users,  load average: 0.50, 0.42, 0.28
	Linux embed-certs-546209 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] <==
	* E0907 01:01:38.173712       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:01:38.173719       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0907 01:01:38.173896       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:01:38.175319       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:02:36.973494       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.132.56:443: connect: connection refused
	I0907 01:02:36.973526       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:02:38.174929       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:02:38.175101       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:02:38.175164       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:02:38.176123       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:02:38.176287       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:02:38.176335       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:03:36.973343       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.132.56:443: connect: connection refused
	I0907 01:03:36.973431       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0907 01:04:36.973233       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.132.56:443: connect: connection refused
	I0907 01:04:36.973309       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:04:38.176095       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:04:38.176333       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:04:38.176370       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:04:38.176468       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:04:38.176544       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:04:38.177717       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] <==
	* I0907 00:59:20.602967       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 00:59:50.111239       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 00:59:50.613267       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:00:20.121812       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:00:20.623090       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:00:50.129847       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:00:50.634233       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:01:20.134907       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:01:20.642157       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:01:50.142193       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:01:50.651950       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:02:20.148410       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:02:20.662740       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0907 01:02:38.508625       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="292.181µs"
	E0907 01:02:50.154738       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:02:50.672223       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0907 01:02:52.503212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="104.342µs"
	E0907 01:03:20.160837       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:03:20.681550       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:03:50.167858       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:03:50.693115       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:04:20.174081       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:04:20.705181       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:04:50.180505       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:04:50.713934       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] <==
	* I0907 00:51:39.087480       1 server_others.go:69] "Using iptables proxy"
	I0907 00:51:39.185737       1 node.go:141] Successfully retrieved node IP: 192.168.50.242
	I0907 00:51:39.246818       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0907 00:51:39.246936       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0907 00:51:39.251491       1 server_others.go:152] "Using iptables Proxier"
	I0907 00:51:39.251613       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0907 00:51:39.252587       1 server.go:846] "Version info" version="v1.28.1"
	I0907 00:51:39.252747       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:51:39.253869       1 config.go:188] "Starting service config controller"
	I0907 00:51:39.254010       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0907 00:51:39.254047       1 config.go:97] "Starting endpoint slice config controller"
	I0907 00:51:39.254064       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0907 00:51:39.254568       1 config.go:315] "Starting node config controller"
	I0907 00:51:39.254606       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0907 00:51:39.354931       1 shared_informer.go:318] Caches are synced for node config
	I0907 00:51:39.355127       1 shared_informer.go:318] Caches are synced for service config
	I0907 00:51:39.355213       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] <==
	* I0907 00:51:34.688500       1 serving.go:348] Generated self-signed cert in-memory
	W0907 00:51:37.096877       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0907 00:51:37.097090       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0907 00:51:37.100221       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0907 00:51:37.100244       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0907 00:51:37.180276       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0907 00:51:37.180340       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:51:37.187145       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:51:37.187196       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0907 00:51:37.190428       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0907 00:51:37.190521       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0907 00:51:37.288160       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-07 00:51:03 UTC, ends at Thu 2023-09-07 01:05:07 UTC. --
	Sep 07 01:02:27 embed-certs-546209 kubelet[935]: E0907 01:02:27.499115     935 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-z55c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-d7nxw_kube-system(92e557f4-3c56-49f4-931c-0e64fa3cb1df): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 07 01:02:27 embed-certs-546209 kubelet[935]: E0907 01:02:27.499152     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:02:30 embed-certs-546209 kubelet[935]: E0907 01:02:30.518533     935 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:02:30 embed-certs-546209 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:02:30 embed-certs-546209 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:02:30 embed-certs-546209 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:02:38 embed-certs-546209 kubelet[935]: E0907 01:02:38.488183     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:02:52 embed-certs-546209 kubelet[935]: E0907 01:02:52.487456     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:03:03 embed-certs-546209 kubelet[935]: E0907 01:03:03.487168     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:03:16 embed-certs-546209 kubelet[935]: E0907 01:03:16.487810     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:03:30 embed-certs-546209 kubelet[935]: E0907 01:03:30.521063     935 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:03:30 embed-certs-546209 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:03:30 embed-certs-546209 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:03:30 embed-certs-546209 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:03:31 embed-certs-546209 kubelet[935]: E0907 01:03:31.487105     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:03:46 embed-certs-546209 kubelet[935]: E0907 01:03:46.488055     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:04:01 embed-certs-546209 kubelet[935]: E0907 01:04:01.486758     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:04:14 embed-certs-546209 kubelet[935]: E0907 01:04:14.486938     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:04:29 embed-certs-546209 kubelet[935]: E0907 01:04:29.486249     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:04:30 embed-certs-546209 kubelet[935]: E0907 01:04:30.517834     935 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:04:30 embed-certs-546209 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:04:30 embed-certs-546209 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:04:30 embed-certs-546209 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:04:40 embed-certs-546209 kubelet[935]: E0907 01:04:40.488427     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:04:55 embed-certs-546209 kubelet[935]: E0907 01:04:55.487166     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	
	* 
	* ==> storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] <==
	* I0907 00:52:09.890019       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0907 00:52:09.904609       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0907 00:52:09.904784       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0907 00:52:09.921291       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0907 00:52:09.921520       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-546209_9a1fbf15-7ae6-4bc3-9626-cf7a3ee36744!
	I0907 00:52:09.924209       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12604b25-c97b-477b-a25c-0fcb9eaf879f", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-546209_9a1fbf15-7ae6-4bc3-9626-cf7a3ee36744 became leader
	I0907 00:52:10.021966       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-546209_9a1fbf15-7ae6-4bc3-9626-cf7a3ee36744!
	
	* 
	* ==> storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] <==
	* I0907 00:51:39.141495       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0907 00:52:09.165406       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-546209 -n embed-certs-546209
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-546209 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-d7nxw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-546209 describe pod metrics-server-57f55c9bc5-d7nxw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-546209 describe pod metrics-server-57f55c9bc5-d7nxw: exit status 1 (66.960591ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-d7nxw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-546209 describe pod metrics-server-57f55c9bc5-d7nxw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
E0907 01:05:25.164170   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-07 01:05:25.28740669 +0000 UTC m=+5264.008862240
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-773466 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-773466 logs -n 25: (1.625978384s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-049830                           | kubernetes-upgrade-049830    | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-386196                              | cert-expiration-386196       | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-049830                           | kubernetes-upgrade-049830    | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-940806        | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:43 UTC | 07 Sep 23 00:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-321164             | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-546209            | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-488051 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | disable-driver-mounts-488051                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:45 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-940806             | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-773466  | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-321164                  | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-546209                 | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-773466       | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC | 07 Sep 23 00:56 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/07 00:48:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:48:30.668905   47297 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:48:30.669040   47297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:48:30.669051   47297 out.go:309] Setting ErrFile to fd 2...
	I0907 00:48:30.669055   47297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:48:30.669275   47297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:48:30.669849   47297 out.go:303] Setting JSON to false
	I0907 00:48:30.670802   47297 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5455,"bootTime":1694042256,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:48:30.670876   47297 start.go:138] virtualization: kvm guest
	I0907 00:48:30.673226   47297 out.go:177] * [default-k8s-diff-port-773466] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:48:30.675018   47297 notify.go:220] Checking for updates...
	I0907 00:48:30.675022   47297 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:48:30.676573   47297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:48:30.677899   47297 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:48:30.679390   47297 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:48:30.680678   47297 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:48:30.682324   47297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:48:30.684199   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:48:30.684737   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:48:30.684791   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:48:30.699093   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37855
	I0907 00:48:30.699446   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:48:30.699961   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:48:30.699981   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:48:30.700356   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:48:30.700531   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:48:30.700779   47297 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:48:30.701065   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:48:30.701099   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:48:30.715031   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41907
	I0907 00:48:30.715374   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:48:30.715847   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:48:30.715866   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:48:30.716151   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:48:30.716316   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:48:30.750129   47297 out.go:177] * Using the kvm2 driver based on existing profile
	I0907 00:48:30.751568   47297 start.go:298] selected driver: kvm2
	I0907 00:48:30.751584   47297 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:48:30.751680   47297 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:48:30.752362   47297 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:48:30.752458   47297 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:48:30.765932   47297 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 00:48:30.766254   47297 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:48:30.766285   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:48:30.766297   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:48:30.766312   47297 start_flags.go:321] config:
	{Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:48:30.766449   47297 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:48:30.768165   47297 out.go:177] * Starting control plane node default-k8s-diff-port-773466 in cluster default-k8s-diff-port-773466
	I0907 00:48:28.807066   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:30.769579   47297 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:48:30.769605   47297 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0907 00:48:30.769618   47297 cache.go:57] Caching tarball of preloaded images
	I0907 00:48:30.769690   47297 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:48:30.769700   47297 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 00:48:30.769802   47297 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/config.json ...
	I0907 00:48:30.769965   47297 start.go:365] acquiring machines lock for default-k8s-diff-port-773466: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:48:34.886988   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:37.959093   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:44.039083   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:47.111100   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:53.191104   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:56.263090   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:02.343026   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:05.415059   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:11.495064   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:14.567091   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:20.647045   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:23.719041   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:29.799012   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:32.871070   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:38.951073   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:42.023127   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:48.103090   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:51.175063   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:57.255062   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:00.327063   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:06.407045   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:09.479083   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:15.559056   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:18.631050   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:24.711070   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:27.783032   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:30.786864   46768 start.go:369] acquired machines lock for "no-preload-321164" in 3m55.470116528s
	I0907 00:50:30.786911   46768 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:50:30.786932   46768 fix.go:54] fixHost starting: 
	I0907 00:50:30.787365   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:50:30.787402   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:50:30.802096   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0907 00:50:30.802471   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:50:30.803040   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:50:30.803070   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:50:30.803390   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:50:30.803609   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:30.803735   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:50:30.805366   46768 fix.go:102] recreateIfNeeded on no-preload-321164: state=Stopped err=<nil>
	I0907 00:50:30.805394   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	W0907 00:50:30.805601   46768 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:50:30.807478   46768 out.go:177] * Restarting existing kvm2 VM for "no-preload-321164" ...
	I0907 00:50:30.784621   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:50:30.784665   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:50:30.786659   46354 machine.go:91] provisioned docker machine in 4m37.428246924s
	I0907 00:50:30.786707   46354 fix.go:56] fixHost completed within 4m37.448613342s
	I0907 00:50:30.786715   46354 start.go:83] releasing machines lock for "old-k8s-version-940806", held for 4m37.448629588s
	W0907 00:50:30.786743   46354 start.go:672] error starting host: provision: host is not running
	W0907 00:50:30.786862   46354 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0907 00:50:30.786876   46354 start.go:687] Will try again in 5 seconds ...
	I0907 00:50:30.809015   46768 main.go:141] libmachine: (no-preload-321164) Calling .Start
	I0907 00:50:30.809182   46768 main.go:141] libmachine: (no-preload-321164) Ensuring networks are active...
	I0907 00:50:30.809827   46768 main.go:141] libmachine: (no-preload-321164) Ensuring network default is active
	I0907 00:50:30.810153   46768 main.go:141] libmachine: (no-preload-321164) Ensuring network mk-no-preload-321164 is active
	I0907 00:50:30.810520   46768 main.go:141] libmachine: (no-preload-321164) Getting domain xml...
	I0907 00:50:30.811434   46768 main.go:141] libmachine: (no-preload-321164) Creating domain...
	I0907 00:50:32.024103   46768 main.go:141] libmachine: (no-preload-321164) Waiting to get IP...
	I0907 00:50:32.024955   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.025314   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.025386   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.025302   47622 retry.go:31] will retry after 211.413529ms: waiting for machine to come up
	I0907 00:50:32.238887   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.239424   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.239452   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.239400   47622 retry.go:31] will retry after 306.62834ms: waiting for machine to come up
	I0907 00:50:32.547910   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.548378   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.548409   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.548318   47622 retry.go:31] will retry after 360.126343ms: waiting for machine to come up
	I0907 00:50:32.909809   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.910325   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.910356   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.910259   47622 retry.go:31] will retry after 609.953186ms: waiting for machine to come up
	I0907 00:50:33.522073   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:33.522437   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:33.522467   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:33.522382   47622 retry.go:31] will retry after 526.4152ms: waiting for machine to come up
	I0907 00:50:34.050028   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:34.050475   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:34.050503   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:34.050417   47622 retry.go:31] will retry after 748.311946ms: waiting for machine to come up
	I0907 00:50:34.799933   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:34.800367   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:34.800395   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:34.800321   47622 retry.go:31] will retry after 732.484316ms: waiting for machine to come up
	I0907 00:50:35.788945   46354 start.go:365] acquiring machines lock for old-k8s-version-940806: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:50:35.534154   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:35.534583   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:35.534606   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:35.534535   47622 retry.go:31] will retry after 1.217693919s: waiting for machine to come up
	I0907 00:50:36.754260   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:36.754682   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:36.754711   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:36.754634   47622 retry.go:31] will retry after 1.508287783s: waiting for machine to come up
	I0907 00:50:38.264195   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:38.264607   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:38.264630   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:38.264557   47622 retry.go:31] will retry after 1.481448978s: waiting for machine to come up
	I0907 00:50:39.748383   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:39.748865   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:39.748898   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:39.748803   47622 retry.go:31] will retry after 2.345045055s: waiting for machine to come up
	I0907 00:50:42.095158   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:42.095801   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:42.095832   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:42.095747   47622 retry.go:31] will retry after 3.269083195s: waiting for machine to come up
	I0907 00:50:45.369097   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:45.369534   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:45.369561   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:45.369448   47622 retry.go:31] will retry after 4.462134893s: waiting for machine to come up
	I0907 00:50:49.835862   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.836273   46768 main.go:141] libmachine: (no-preload-321164) Found IP for machine: 192.168.61.125
	I0907 00:50:49.836315   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has current primary IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.836342   46768 main.go:141] libmachine: (no-preload-321164) Reserving static IP address...
	I0907 00:50:49.836774   46768 main.go:141] libmachine: (no-preload-321164) Reserved static IP address: 192.168.61.125
	I0907 00:50:49.836794   46768 main.go:141] libmachine: (no-preload-321164) Waiting for SSH to be available...
	I0907 00:50:49.836827   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "no-preload-321164", mac: "52:54:00:eb:da:68", ip: "192.168.61.125"} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.836860   46768 main.go:141] libmachine: (no-preload-321164) DBG | skip adding static IP to network mk-no-preload-321164 - found existing host DHCP lease matching {name: "no-preload-321164", mac: "52:54:00:eb:da:68", ip: "192.168.61.125"}
	I0907 00:50:49.836880   46768 main.go:141] libmachine: (no-preload-321164) DBG | Getting to WaitForSSH function...
	I0907 00:50:49.838931   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.839299   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.839326   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.839464   46768 main.go:141] libmachine: (no-preload-321164) DBG | Using SSH client type: external
	I0907 00:50:49.839500   46768 main.go:141] libmachine: (no-preload-321164) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa (-rw-------)
	I0907 00:50:49.839538   46768 main.go:141] libmachine: (no-preload-321164) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:50:49.839557   46768 main.go:141] libmachine: (no-preload-321164) DBG | About to run SSH command:
	I0907 00:50:49.839568   46768 main.go:141] libmachine: (no-preload-321164) DBG | exit 0
	I0907 00:50:49.930557   46768 main.go:141] libmachine: (no-preload-321164) DBG | SSH cmd err, output: <nil>: 
	I0907 00:50:49.931033   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetConfigRaw
	I0907 00:50:49.931662   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:49.934286   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.934719   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.934755   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.934973   46768 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/config.json ...
	I0907 00:50:49.935197   46768 machine.go:88] provisioning docker machine ...
	I0907 00:50:49.935221   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:49.935409   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:49.935567   46768 buildroot.go:166] provisioning hostname "no-preload-321164"
	I0907 00:50:49.935586   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:49.935730   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:49.937619   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.937879   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.937899   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.938049   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:49.938303   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:49.938464   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:49.938624   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:49.938803   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:49.939300   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:49.939315   46768 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-321164 && echo "no-preload-321164" | sudo tee /etc/hostname
	I0907 00:50:50.076488   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-321164
	
	I0907 00:50:50.076513   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.079041   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.079362   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.079409   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.079614   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.079831   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.080013   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.080183   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.080361   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:50.080757   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:50.080775   46768 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-321164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-321164/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-321164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:50:51.203755   46833 start.go:369] acquired machines lock for "embed-certs-546209" in 4m11.274622402s
	I0907 00:50:51.203804   46833 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:50:51.203823   46833 fix.go:54] fixHost starting: 
	I0907 00:50:51.204233   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:50:51.204274   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:50:51.221096   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0907 00:50:51.221487   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:50:51.222026   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:50:51.222048   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:50:51.222401   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:50:51.222595   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:50:51.222757   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:50:51.224388   46833 fix.go:102] recreateIfNeeded on embed-certs-546209: state=Stopped err=<nil>
	I0907 00:50:51.224413   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	W0907 00:50:51.224585   46833 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:50:51.226812   46833 out.go:177] * Restarting existing kvm2 VM for "embed-certs-546209" ...
	I0907 00:50:50.214796   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:50:50.215590   46768 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:50:50.215629   46768 buildroot.go:174] setting up certificates
	I0907 00:50:50.215639   46768 provision.go:83] configureAuth start
	I0907 00:50:50.215659   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:50.215952   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:50.218581   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.218947   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.218970   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.219137   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.221833   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.222177   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.222221   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.222323   46768 provision.go:138] copyHostCerts
	I0907 00:50:50.222377   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:50:50.222390   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:50:50.222497   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:50:50.222628   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:50:50.222646   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:50:50.222682   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:50:50.222765   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:50:50.222784   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:50:50.222817   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:50:50.222880   46768 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.no-preload-321164 san=[192.168.61.125 192.168.61.125 localhost 127.0.0.1 minikube no-preload-321164]
	I0907 00:50:50.456122   46768 provision.go:172] copyRemoteCerts
	I0907 00:50:50.456175   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:50:50.456198   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.458665   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.459030   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.459053   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.459237   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.459468   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.459630   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.459766   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:50.549146   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:50:50.572002   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0907 00:50:50.595576   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:50:50.618054   46768 provision.go:86] duration metric: configureAuth took 402.401011ms
	I0907 00:50:50.618086   46768 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:50:50.618327   46768 config.go:182] Loaded profile config "no-preload-321164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:50:50.618410   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.620908   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.621255   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.621289   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.621432   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.621619   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.621752   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.621879   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.622006   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:50.622586   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:50.622611   46768 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:50:50.946938   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:50:50.946964   46768 machine.go:91] provisioned docker machine in 1.011750962s
	I0907 00:50:50.946975   46768 start.go:300] post-start starting for "no-preload-321164" (driver="kvm2")
	I0907 00:50:50.946989   46768 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:50:50.947015   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:50.947339   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:50:50.947367   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.950370   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.950754   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.950798   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.950909   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.951171   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.951331   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.951472   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.040440   46768 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:50:51.044700   46768 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:50:51.044728   46768 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:50:51.044816   46768 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:50:51.044899   46768 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:50:51.045018   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:50:51.053507   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:50:51.077125   46768 start.go:303] post-start completed in 130.134337ms
	I0907 00:50:51.077149   46768 fix.go:56] fixHost completed within 20.29021748s
	I0907 00:50:51.077174   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.079928   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.080266   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.080297   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.080516   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.080744   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.080909   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.081080   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.081255   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:51.081837   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:51.081853   46768 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:50:51.203596   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047851.182131777
	
	I0907 00:50:51.203636   46768 fix.go:206] guest clock: 1694047851.182131777
	I0907 00:50:51.203646   46768 fix.go:219] Guest: 2023-09-07 00:50:51.182131777 +0000 UTC Remote: 2023-09-07 00:50:51.077154021 +0000 UTC m=+255.896364351 (delta=104.977756ms)
	I0907 00:50:51.203664   46768 fix.go:190] guest clock delta is within tolerance: 104.977756ms
	I0907 00:50:51.203668   46768 start.go:83] releasing machines lock for "no-preload-321164", held for 20.416782491s
	I0907 00:50:51.203696   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.203977   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:51.207262   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.207708   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.207755   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.207926   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208394   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208563   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208644   46768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:50:51.208692   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.208755   46768 ssh_runner.go:195] Run: cat /version.json
	I0907 00:50:51.208777   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.211412   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211453   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211863   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.211901   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211931   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.211957   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.212132   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.212212   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.212318   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.212406   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.212477   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.212612   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.212722   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.212875   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.300796   46768 ssh_runner.go:195] Run: systemctl --version
	I0907 00:50:51.324903   46768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:50:51.465767   46768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:50:51.471951   46768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:50:51.472036   46768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:50:51.488733   46768 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:50:51.488761   46768 start.go:466] detecting cgroup driver to use...
	I0907 00:50:51.488831   46768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:50:51.501772   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:50:51.516019   46768 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:50:51.516083   46768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:50:51.530425   46768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:50:51.546243   46768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:50:51.649058   46768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:50:51.768622   46768 docker.go:212] disabling docker service ...
	I0907 00:50:51.768705   46768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:50:51.785225   46768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:50:51.797018   46768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:50:51.908179   46768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:50:52.021212   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:50:52.037034   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:50:52.055163   46768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:50:52.055218   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.065451   46768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:50:52.065520   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.076202   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.086865   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.096978   46768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:50:52.107492   46768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:50:52.117036   46768 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:50:52.117104   46768 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:50:52.130309   46768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:50:52.140016   46768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:50:52.249901   46768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:50:52.422851   46768 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:50:52.422928   46768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:50:52.427852   46768 start.go:534] Will wait 60s for crictl version
	I0907 00:50:52.427903   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:52.431904   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:50:52.472552   46768 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:50:52.472632   46768 ssh_runner.go:195] Run: crio --version
	I0907 00:50:52.526514   46768 ssh_runner.go:195] Run: crio --version
	I0907 00:50:52.580133   46768 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:50:51.228316   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Start
	I0907 00:50:51.228549   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring networks are active...
	I0907 00:50:51.229311   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring network default is active
	I0907 00:50:51.229587   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring network mk-embed-certs-546209 is active
	I0907 00:50:51.230001   46833 main.go:141] libmachine: (embed-certs-546209) Getting domain xml...
	I0907 00:50:51.230861   46833 main.go:141] libmachine: (embed-certs-546209) Creating domain...
	I0907 00:50:52.512329   46833 main.go:141] libmachine: (embed-certs-546209) Waiting to get IP...
	I0907 00:50:52.513160   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:52.513607   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:52.513709   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:52.513575   47738 retry.go:31] will retry after 266.575501ms: waiting for machine to come up
	I0907 00:50:52.782236   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:52.782674   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:52.782699   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:52.782623   47738 retry.go:31] will retry after 258.252832ms: waiting for machine to come up
	I0907 00:50:53.042276   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:53.042851   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:53.042886   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:53.042799   47738 retry.go:31] will retry after 480.751908ms: waiting for machine to come up
	I0907 00:50:53.525651   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:53.526280   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:53.526314   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:53.526222   47738 retry.go:31] will retry after 592.373194ms: waiting for machine to come up
	I0907 00:50:54.119935   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:54.120401   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:54.120440   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:54.120320   47738 retry.go:31] will retry after 602.269782ms: waiting for machine to come up
	I0907 00:50:54.723919   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:54.724403   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:54.724429   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:54.724356   47738 retry.go:31] will retry after 631.28427ms: waiting for machine to come up
	I0907 00:50:52.581522   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:52.584587   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:52.584995   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:52.585027   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:52.585212   46768 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0907 00:50:52.589138   46768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:50:52.602205   46768 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:50:52.602259   46768 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:50:52.633785   46768 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:50:52.633808   46768 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0907 00:50:52.633868   46768 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.633887   46768 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.633889   46768 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.633929   46768 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0907 00:50:52.633954   46768 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.633849   46768 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:52.633937   46768 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.634076   46768 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.635447   46768 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.635477   46768 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.635516   46768 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.635529   46768 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.635477   46768 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.635578   46768 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.635583   46768 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0907 00:50:52.635587   46768 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:52.868791   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.917664   46768 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0907 00:50:52.917705   46768 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.917740   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:52.921520   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.924174   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.924775   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0907 00:50:52.926455   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.927265   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.936511   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.936550   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.989863   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0907 00:50:52.989967   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.081783   46768 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0907 00:50:53.081828   46768 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:53.081876   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.200951   46768 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0907 00:50:53.200999   46768 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:53.201037   46768 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0907 00:50:53.201055   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201074   46768 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:53.201115   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201120   46768 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0907 00:50:53.201138   46768 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:53.201163   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201196   46768 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0907 00:50:53.201208   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0907 00:50:53.201220   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.201222   46768 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:53.201245   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.201254   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201257   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:53.213879   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:53.213909   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:53.214030   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:53.559290   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.356797   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:55.357248   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:55.357276   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:55.357208   47738 retry.go:31] will retry after 957.470134ms: waiting for machine to come up
	I0907 00:50:56.316920   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:56.317410   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:56.317437   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:56.317357   47738 retry.go:31] will retry after 929.647798ms: waiting for machine to come up
	I0907 00:50:57.249114   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:57.249599   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:57.249631   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:57.249548   47738 retry.go:31] will retry after 1.218276188s: waiting for machine to come up
	I0907 00:50:58.470046   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:58.470509   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:58.470539   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:58.470461   47738 retry.go:31] will retry after 2.324175972s: waiting for machine to come up
	I0907 00:50:55.219723   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (2.018454399s)
	I0907 00:50:55.219753   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0907 00:50:55.219835   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0: (2.018563387s)
	I0907 00:50:55.219874   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0907 00:50:55.219897   46768 ssh_runner.go:235] Completed: which crictl: (2.01861063s)
	I0907 00:50:55.219931   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1: (2.006023749s)
	I0907 00:50:55.219956   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:55.219965   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0907 00:50:55.219974   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:55.220018   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.220026   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1: (2.006085999s)
	I0907 00:50:55.220034   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1: (2.005987599s)
	I0907 00:50:55.220056   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0907 00:50:55.220062   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0907 00:50:55.220065   46768 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.660750078s)
	I0907 00:50:55.220091   46768 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0907 00:50:55.220107   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:50:55.220139   46768 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.220178   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:55.220141   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:50:55.263187   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0907 00:50:55.263256   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0907 00:50:55.263276   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.263282   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0907 00:50:55.263291   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:50:55.263321   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.263334   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0907 00:50:55.263428   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0907 00:50:55.263432   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.275710   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0907 00:50:58.251089   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.987744073s)
	I0907 00:50:58.251119   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0907 00:50:58.251125   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.987662447s)
	I0907 00:50:58.251143   46768 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:58.251164   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0907 00:50:58.251192   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:58.251253   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0907 00:50:58.256733   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0907 00:51:00.798145   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:00.798673   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:00.798702   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:00.798607   47738 retry.go:31] will retry after 1.874271621s: waiting for machine to come up
	I0907 00:51:02.674532   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:02.675085   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:02.675117   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:02.675050   47738 retry.go:31] will retry after 2.9595889s: waiting for machine to come up
	I0907 00:51:04.952628   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.701410779s)
	I0907 00:51:04.952741   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0907 00:51:04.952801   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:51:04.952854   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:51:05.636309   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:05.636744   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:05.636779   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:05.636694   47738 retry.go:31] will retry after 4.45645523s: waiting for machine to come up
	I0907 00:51:06.100759   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.147880303s)
	I0907 00:51:06.100786   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0907 00:51:06.100803   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:51:06.100844   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:51:08.663694   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.56282168s)
	I0907 00:51:08.663725   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0907 00:51:08.663754   46768 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:51:08.663803   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:51:10.023202   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.359374479s)
	I0907 00:51:10.023234   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0907 00:51:10.023276   46768 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0907 00:51:10.023349   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0907 00:51:11.739345   47297 start.go:369] acquired machines lock for "default-k8s-diff-port-773466" in 2m40.969329009s
	I0907 00:51:11.739394   47297 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:51:11.739419   47297 fix.go:54] fixHost starting: 
	I0907 00:51:11.739834   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:11.739870   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:11.755796   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38079
	I0907 00:51:11.756102   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:11.756564   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:51:11.756588   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:11.756875   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:11.757032   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:11.757185   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:51:11.758750   47297 fix.go:102] recreateIfNeeded on default-k8s-diff-port-773466: state=Stopped err=<nil>
	I0907 00:51:11.758772   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	W0907 00:51:11.758955   47297 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:51:11.761066   47297 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-773466" ...
	I0907 00:51:10.095825   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.096285   46833 main.go:141] libmachine: (embed-certs-546209) Found IP for machine: 192.168.50.242
	I0907 00:51:10.096312   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has current primary IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.096321   46833 main.go:141] libmachine: (embed-certs-546209) Reserving static IP address...
	I0907 00:51:10.096706   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "embed-certs-546209", mac: "52:54:00:96:b3:6a", ip: "192.168.50.242"} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.096731   46833 main.go:141] libmachine: (embed-certs-546209) Reserved static IP address: 192.168.50.242
	I0907 00:51:10.096750   46833 main.go:141] libmachine: (embed-certs-546209) DBG | skip adding static IP to network mk-embed-certs-546209 - found existing host DHCP lease matching {name: "embed-certs-546209", mac: "52:54:00:96:b3:6a", ip: "192.168.50.242"}
	I0907 00:51:10.096766   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Getting to WaitForSSH function...
	I0907 00:51:10.096777   46833 main.go:141] libmachine: (embed-certs-546209) Waiting for SSH to be available...
	I0907 00:51:10.098896   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.099227   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.099260   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.099360   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Using SSH client type: external
	I0907 00:51:10.099382   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa (-rw-------)
	I0907 00:51:10.099412   46833 main.go:141] libmachine: (embed-certs-546209) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:10.099428   46833 main.go:141] libmachine: (embed-certs-546209) DBG | About to run SSH command:
	I0907 00:51:10.099444   46833 main.go:141] libmachine: (embed-certs-546209) DBG | exit 0
	I0907 00:51:10.199038   46833 main.go:141] libmachine: (embed-certs-546209) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:10.199377   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetConfigRaw
	I0907 00:51:10.200006   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:10.202924   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.203328   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.203352   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.203576   46833 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/config.json ...
	I0907 00:51:10.203879   46833 machine.go:88] provisioning docker machine ...
	I0907 00:51:10.203908   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:10.204125   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.204290   46833 buildroot.go:166] provisioning hostname "embed-certs-546209"
	I0907 00:51:10.204312   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.204489   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.206898   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.207332   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.207365   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.207473   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.207643   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.207791   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.207920   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.208080   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:10.208476   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:10.208496   46833 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-546209 && echo "embed-certs-546209" | sudo tee /etc/hostname
	I0907 00:51:10.356060   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-546209
	
	I0907 00:51:10.356098   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.359533   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.359867   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.359896   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.360097   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.360284   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.360435   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.360629   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.360820   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:10.361504   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:10.361538   46833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-546209' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-546209/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-546209' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:10.503181   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:10.503211   46833 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:10.503238   46833 buildroot.go:174] setting up certificates
	I0907 00:51:10.503246   46833 provision.go:83] configureAuth start
	I0907 00:51:10.503254   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.503555   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:10.506514   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.506930   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.506955   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.507150   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.509772   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.510081   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.510111   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.510215   46833 provision.go:138] copyHostCerts
	I0907 00:51:10.510281   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:10.510292   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:10.510345   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:10.510438   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:10.510446   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:10.510466   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:10.510552   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:10.510559   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:10.510579   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:10.510638   46833 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.embed-certs-546209 san=[192.168.50.242 192.168.50.242 localhost 127.0.0.1 minikube embed-certs-546209]
	I0907 00:51:10.947044   46833 provision.go:172] copyRemoteCerts
	I0907 00:51:10.947101   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:10.947122   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.949879   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.950221   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.950251   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.950456   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.950660   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.950849   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.950993   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.052610   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:11.077082   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0907 00:51:11.100979   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:51:11.124155   46833 provision.go:86] duration metric: configureAuth took 620.900948ms
	I0907 00:51:11.124176   46833 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:11.124389   46833 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:11.124456   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.127163   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.127498   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.127536   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.127813   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.128011   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.128201   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.128381   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.128560   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:11.129185   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:11.129214   46833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:11.467260   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:11.467297   46833 machine.go:91] provisioned docker machine in 1.263400182s
	I0907 00:51:11.467309   46833 start.go:300] post-start starting for "embed-certs-546209" (driver="kvm2")
	I0907 00:51:11.467321   46833 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:11.467343   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.467669   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:11.467715   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.470299   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.470675   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.470705   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.470846   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.471038   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.471191   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.471435   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.568708   46833 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:11.573505   46833 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:11.573533   46833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:11.573595   46833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:11.573669   46833 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:11.573779   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:11.582612   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:11.607383   46833 start.go:303] post-start completed in 140.062214ms
	I0907 00:51:11.607400   46833 fix.go:56] fixHost completed within 20.403578781s
	I0907 00:51:11.607419   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.609882   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.610233   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.610265   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.610411   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.610602   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.610792   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.610972   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.611161   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:11.611550   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:11.611563   46833 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0907 00:51:11.739146   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047871.687486971
	
	I0907 00:51:11.739167   46833 fix.go:206] guest clock: 1694047871.687486971
	I0907 00:51:11.739176   46833 fix.go:219] Guest: 2023-09-07 00:51:11.687486971 +0000 UTC Remote: 2023-09-07 00:51:11.607403696 +0000 UTC m=+271.818672785 (delta=80.083275ms)
	I0907 00:51:11.739196   46833 fix.go:190] guest clock delta is within tolerance: 80.083275ms
	I0907 00:51:11.739202   46833 start.go:83] releasing machines lock for "embed-certs-546209", held for 20.535419293s
	I0907 00:51:11.739232   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.739478   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:11.742078   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.742446   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.742474   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.742676   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743172   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743342   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743422   46833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:11.743470   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.743541   46833 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:11.743573   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.746120   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746484   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.746516   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746536   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746640   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.746843   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.746989   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.747015   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.747044   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.747169   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.747179   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.747394   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.747556   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.747717   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.839831   46833 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:11.861736   46833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:12.006017   46833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:12.011678   46833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:12.011739   46833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:12.026851   46833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:12.026871   46833 start.go:466] detecting cgroup driver to use...
	I0907 00:51:12.026934   46833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:12.040077   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:12.052962   46833 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:12.053018   46833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:12.066509   46833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:12.079587   46833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:12.189043   46833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:12.310997   46833 docker.go:212] disabling docker service ...
	I0907 00:51:12.311065   46833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:12.324734   46833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:12.336808   46833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:12.461333   46833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:12.584841   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:12.598337   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:12.615660   46833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:51:12.615736   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.626161   46833 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:12.626232   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.637475   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.647631   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.658444   46833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:12.669167   46833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:12.678558   46833 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:12.678614   46833 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:12.692654   46833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:12.703465   46833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:12.820819   46833 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:12.996574   46833 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:12.996650   46833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:13.002744   46833 start.go:534] Will wait 60s for crictl version
	I0907 00:51:13.002818   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:51:13.007287   46833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:13.042173   46833 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:13.042254   46833 ssh_runner.go:195] Run: crio --version
	I0907 00:51:13.090562   46833 ssh_runner.go:195] Run: crio --version
	I0907 00:51:13.145112   46833 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:51:13.146767   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:13.149953   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:13.150357   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:13.150388   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:13.150603   46833 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:13.154792   46833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:13.166540   46833 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:51:13.166607   46833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:13.203316   46833 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:51:13.203391   46833 ssh_runner.go:195] Run: which lz4
	I0907 00:51:13.207399   46833 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0907 00:51:13.211826   46833 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:13.211854   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0907 00:51:10.979891   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0907 00:51:10.979935   46768 cache_images.go:123] Successfully loaded all cached images
	I0907 00:51:10.979942   46768 cache_images.go:92] LoadImages completed in 18.346122768s
	I0907 00:51:10.980017   46768 ssh_runner.go:195] Run: crio config
	I0907 00:51:11.044573   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:51:11.044595   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:11.044612   46768 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:11.044630   46768 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-321164 NodeName:no-preload-321164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:11.044749   46768 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-321164"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:11.044807   46768 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-321164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-321164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
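
The drop-in above is the whole trick for running a pinned kubelet: the first, empty ExecStart= clears whatever command the packaged kubelet.service defines, and the second ExecStart= substitutes the minikube-managed binary and flags; the rendered text is then copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A minimal sketch of rendering such a drop-in, assuming a hypothetical writeKubeletDropIn helper and a local file write instead of the scp in the log:

package main

import (
	"fmt"
	"os"
)

// writeKubeletDropIn renders a systemd override like the one in the log. The
// doubled ExecStart= is deliberate: the empty assignment resets the base
// unit's command before the real kubelet invocation is set.
func writeKubeletDropIn(path, nodeName, nodeIP string) error {
	unit := fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, nodeName, nodeIP)
	return os.WriteFile(path, []byte(unit), 0o644)
}

func main() {
	err := writeKubeletDropIn("/tmp/10-kubeadm.conf", "no-preload-321164", "192.168.61.125")
	fmt.Println("drop-in written, err:", err)
}
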
	I0907 00:51:11.044852   46768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:11.055469   46768 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:11.055527   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:11.063642   46768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0907 00:51:11.081151   46768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:11.098623   46768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0907 00:51:11.116767   46768 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:11.120552   46768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:11.133845   46768 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164 for IP: 192.168.61.125
	I0907 00:51:11.133876   46768 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:11.134026   46768 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:11.134092   46768 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:11.134173   46768 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.key
	I0907 00:51:11.134216   46768 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.key.05d6cdfc
	I0907 00:51:11.134252   46768 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.key
	I0907 00:51:11.134393   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:11.134436   46768 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:11.134455   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:11.134488   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:11.134512   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:11.134534   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:11.134576   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:11.135184   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:11.161212   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:51:11.185797   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:11.209084   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:51:11.233001   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:11.255646   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:11.278323   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:11.301913   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:11.324316   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:11.349950   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:11.375738   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:11.402735   46768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:11.421372   46768 ssh_runner.go:195] Run: openssl version
	I0907 00:51:11.426855   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:11.436392   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.440778   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.440825   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.446374   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:11.455773   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:11.465073   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.470197   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.470243   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.475740   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:11.484993   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:11.494256   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.498766   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.498825   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.504037   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
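
The symlink names under /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: each CA in /usr/share/ca-certificates is hashed with openssl x509 -hash -noout and linked under <hash>.0 so the library can find it during chain verification. A sketch of that install step in Go, shelling out to the real openssl binary; the installCA helper name and local symlinking are assumptions for illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links a CA certificate into certsDir under its OpenSSL
// subject-hash name (<hash>.0), which is how trusted roots are located at
// verification time. Equivalent to the openssl + ln -fs pair in the log.
func installCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // drop any stale link first, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println("installed, err:", err)
}
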
	I0907 00:51:11.512896   46768 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:11.517289   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:11.523115   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:11.528780   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:11.534330   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:11.539777   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:11.545439   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
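
Each openssl run above uses -checkend 86400, i.e. "will this certificate still be valid 24 hours from now?": exit 0 keeps the existing file, a non-zero exit would trigger regeneration. The same check in pure Go, without shelling out, might look like the following (function name and paths are illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path will still be valid
// window from now, mirroring openssl x509 -checkend <seconds>.
func validFor(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println("valid for another 24h:", ok, "err:", err)
}
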
	I0907 00:51:11.550878   46768 kubeadm.go:404] StartCluster: {Name:no-preload-321164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:no-preload-321164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:11.550968   46768 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:11.551014   46768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:11.582341   46768 cri.go:89] found id: ""
	I0907 00:51:11.582409   46768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:11.591760   46768 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:11.591782   46768 kubeadm.go:636] restartCluster start
	I0907 00:51:11.591825   46768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:11.600241   46768 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
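
The restart-versus-fresh-init decision above comes down to an existence check: if the kubelet's kubeadm-flags.env and config.yaml plus the etcd data directory are all still on disk, the cluster is restarted in place instead of being re-initialised (the /data/minikube probe that follows is only about legacy compat symlinks). A sketch of that gate under those assumptions, with a hypothetical helper name:

package main

import (
	"fmt"
	"os"
)

// shouldRestartCluster reports whether prior kubeadm state exists on disk,
// the condition behind "found existing configuration files, will attempt
// cluster restart" in the log. Paths copied from the ls invocation above.
func shouldRestartCluster() bool {
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			return false // anything missing means a clean kubeadm init instead
		}
	}
	return true
}

func main() {
	fmt.Println("attempt cluster restart:", shouldRestartCluster())
}
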
	I0907 00:51:11.601258   46768 kubeconfig.go:92] found "no-preload-321164" server: "https://192.168.61.125:8443"
	I0907 00:51:11.603775   46768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:11.612221   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:11.612268   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:11.622330   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:11.622348   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:11.622392   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:11.632889   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:12.133626   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:12.133726   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:12.144713   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:12.633065   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:12.633145   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:12.648698   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:13.133304   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:13.133401   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:13.146822   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:13.633303   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:13.633374   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:13.648566   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:14.132966   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:14.133041   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:14.147847   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:14.633090   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:14.633177   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:14.648893   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.133388   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:15.133465   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:15.149162   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
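
The repeating "Checking apiserver status" entries are a poll loop: roughly every 500 ms the runner asks pgrep for a kube-apiserver process and keeps going until one appears or an overall deadline runs out (the "context deadline exceeded" that shows up later). A compact sketch of that wait, assuming a local pgrep and a caller-supplied context rather than minikube's real api_server helpers:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process shows up
// or ctx expires, roughly matching the 500 ms cadence in the log.
func waitForAPIServerProcess(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // e.g. context deadline exceeded
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println("apiserver wait:", waitForAPIServerProcess(ctx))
}
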
	I0907 00:51:11.762623   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Start
	I0907 00:51:11.762823   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring networks are active...
	I0907 00:51:11.763580   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring network default is active
	I0907 00:51:11.764022   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring network mk-default-k8s-diff-port-773466 is active
	I0907 00:51:11.764494   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Getting domain xml...
	I0907 00:51:11.765139   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Creating domain...
	I0907 00:51:13.032555   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting to get IP...
	I0907 00:51:13.033441   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.033887   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.033934   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.033855   47907 retry.go:31] will retry after 214.721735ms: waiting for machine to come up
	I0907 00:51:13.250549   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.251062   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.251090   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.251001   47907 retry.go:31] will retry after 260.305773ms: waiting for machine to come up
	I0907 00:51:13.512603   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.513144   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.513175   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.513088   47907 retry.go:31] will retry after 293.213959ms: waiting for machine to come up
	I0907 00:51:13.807649   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.808180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.808216   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.808128   47907 retry.go:31] will retry after 455.70029ms: waiting for machine to come up
	I0907 00:51:14.265914   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:14.266412   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:14.266444   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:14.266367   47907 retry.go:31] will retry after 761.48199ms: waiting for machine to come up
	I0907 00:51:15.029446   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.029916   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.029950   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:15.029868   47907 retry.go:31] will retry after 889.947924ms: waiting for machine to come up
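
The DBG lines from the kvm2 driver show the other half of the restart: the freshly defined domain is polled for an IP lease, retrying with a growing, jittered delay (214 ms, 260 ms, 293 ms, 455 ms, ...). A sketch of that retry-with-backoff pattern; getIP, the attempt budget and the growth factor are stand-ins, not the real retry.go implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling getIP with a jittered, growing delay until it
// succeeds or the attempt budget runs out, similar to the retry.go lines
// in the log ("will retry after ...: waiting for machine to come up").
func waitForIP(getIP func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := getIP(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base delay between attempts
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	// A getIP that never succeeds, just to exercise the loop.
	_, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 3)
	fmt.Println(err)
}
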
	I0907 00:51:15.079606   46833 crio.go:444] Took 1.872243 seconds to copy over tarball
	I0907 00:51:15.079679   46833 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:51:18.068521   46833 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988813422s)
	I0907 00:51:18.068547   46833 crio.go:451] Took 2.988919 seconds to extract the tarball
	I0907 00:51:18.068557   46833 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:51:18.109973   46833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:18.154472   46833 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:51:18.154493   46833 cache_images.go:84] Images are preloaded, skipping loading
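
Whether the preload transfer is needed at all is decided by listing the runtime's images with sudo crictl images --output json and looking for the expected control-plane images: compare the earlier "couldn't find preloaded image for registry.k8s.io/kube-apiserver:v1.28.1" with the "all images are preloaded" result here. A small sketch of that decision; the JSON shape is the simplified subset assumed for illustration:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the relevant part of crictl's JSON output: a top-level
// "images" array whose entries carry repoTags.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage runs crictl locally and reports whether any image carries the
// wanted tag, e.g. "registry.k8s.io/kube-apiserver:v1.28.1". Sketch only;
// minikube runs the same command on the guest through its ssh_runner.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.1")
	fmt.Println("preloaded:", ok, "err:", err)
}
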
	I0907 00:51:18.154568   46833 ssh_runner.go:195] Run: crio config
	I0907 00:51:18.216517   46833 cni.go:84] Creating CNI manager for ""
	I0907 00:51:18.216549   46833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:18.216571   46833 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:18.216597   46833 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.242 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-546209 NodeName:embed-certs-546209 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:18.216747   46833 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-546209"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:18.216815   46833 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-546209 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:51:18.216863   46833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:18.230093   46833 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:18.230164   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:18.239087   46833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0907 00:51:18.256683   46833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:18.274030   46833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0907 00:51:18.294711   46833 ssh_runner.go:195] Run: grep 192.168.50.242	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:18.299655   46833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:18.312980   46833 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209 for IP: 192.168.50.242
	I0907 00:51:18.313028   46833 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:18.313215   46833 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:18.313283   46833 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:18.313382   46833 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/client.key
	I0907 00:51:18.313446   46833 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.key.5dc0f9a1
	I0907 00:51:18.313495   46833 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.key
	I0907 00:51:18.313607   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:18.313633   46833 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:18.313640   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:18.313665   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:18.313688   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:18.313709   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:18.313747   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:18.314356   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:18.344731   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:51:18.368872   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:18.397110   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:51:18.424441   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:18.452807   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:18.481018   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:18.509317   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:18.541038   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:18.565984   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:18.590863   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:18.614083   46833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:18.631295   46833 ssh_runner.go:195] Run: openssl version
	I0907 00:51:18.637229   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:18.651999   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.656999   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.657052   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.663109   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:18.675826   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:18.688358   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.693281   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.693331   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.699223   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:18.711511   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:18.724096   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.729285   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.729338   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.735410   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:18.747948   46833 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:18.753003   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:18.759519   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:18.765813   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:18.772328   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:18.778699   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:18.785207   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:51:18.791515   46833 kubeadm.go:404] StartCluster: {Name:embed-certs-546209 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:embed-certs-546209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:18.791636   46833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:18.791719   46833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:18.831468   46833 cri.go:89] found id: ""
	I0907 00:51:18.831544   46833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:18.843779   46833 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:18.843805   46833 kubeadm.go:636] restartCluster start
	I0907 00:51:18.843863   46833 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:18.854604   46833 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.855622   46833 kubeconfig.go:92] found "embed-certs-546209" server: "https://192.168.50.242:8443"
	I0907 00:51:18.857679   46833 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:18.867583   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.867640   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.879567   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.879587   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.879634   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.891098   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.391839   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.391932   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.405078   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.633045   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:15.633128   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:15.644837   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:16.133842   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:16.133926   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:16.148072   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:16.633750   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:16.633828   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:16.648961   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:17.133669   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:17.133757   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:17.148342   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:17.633967   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:17.634076   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:17.649188   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.133815   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.133917   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.148350   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.633962   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.634047   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.649195   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.133733   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.133821   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.145109   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.633727   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.633808   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.645272   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.133921   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.133990   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.145494   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.920914   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.921395   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.921430   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:15.921325   47907 retry.go:31] will retry after 952.422054ms: waiting for machine to come up
	I0907 00:51:16.875800   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:16.876319   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:16.876356   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:16.876272   47907 retry.go:31] will retry after 1.481584671s: waiting for machine to come up
	I0907 00:51:18.359815   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:18.360270   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:18.360308   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:18.360185   47907 retry.go:31] will retry after 1.355619716s: waiting for machine to come up
	I0907 00:51:19.717081   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:19.717458   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:19.717485   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:19.717419   47907 retry.go:31] will retry after 1.450172017s: waiting for machine to come up
	I0907 00:51:19.892019   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.038702   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.051318   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.391815   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.391913   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.404956   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.891503   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.891594   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.904473   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.391486   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.391563   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.405726   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.891257   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.891337   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.905422   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:22.392028   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:22.392137   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:22.408621   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:22.891926   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:22.892033   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:22.906116   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:23.391605   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:23.391684   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:23.404834   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:23.891360   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:23.891447   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:23.908340   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:24.391916   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:24.392007   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:24.408806   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.633099   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.633200   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.644181   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.133144   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.133227   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.144139   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.612786   46768 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:21.612814   46768 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:21.612826   46768 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:21.612881   46768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:21.643142   46768 cri.go:89] found id: ""
	I0907 00:51:21.643216   46768 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:21.658226   46768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:21.666895   46768 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:21.666960   46768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:21.675285   46768 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:21.675317   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:21.817664   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.473084   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.670341   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.752820   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
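
Because none of the /etc/kubernetes/*.conf files survived the stop, nothing stale is cleaned up; the runner simply replays the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml. A hedged sketch of driving that phase sequence; the PATH handling from the log is omitted and the helper name is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// reconfigurePhases replays the kubeadm init phases seen in the log, each
// against the same generated config. Any failure aborts the sequence.
func reconfigurePhases(kubeadmYAML string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", kubeadmYAML)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(reconfigurePhases("/var/tmp/minikube/kubeadm.yaml"))
}
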
	I0907 00:51:22.842789   46768 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:22.842868   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:22.861783   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:23.383385   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:23.884041   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:24.384065   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:24.884077   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:21.168650   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:21.169014   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:21.169037   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:21.168966   47907 retry.go:31] will retry after 2.876055316s: waiting for machine to come up
	I0907 00:51:24.046598   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:24.046990   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:24.047020   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:24.046937   47907 retry.go:31] will retry after 2.837607521s: waiting for machine to come up
	I0907 00:51:24.891477   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:24.891564   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:24.908102   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:25.391625   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:25.391704   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:25.408399   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:25.892052   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:25.892166   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:25.909608   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:26.391529   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:26.391610   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:26.407459   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:26.891930   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:26.891994   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:26.908217   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:27.391815   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:27.391898   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:27.404370   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:27.891918   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:27.892001   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:27.904988   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:28.391570   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:28.391650   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:28.403968   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:28.868619   46833 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:28.868666   46833 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:28.868679   46833 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:28.868736   46833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:28.907258   46833 cri.go:89] found id: ""
	I0907 00:51:28.907332   46833 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:28.926539   46833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:28.938760   46833 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:28.938837   46833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:28.950550   46833 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:28.950576   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:29.092484   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
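
Here the 46833 run hits the same path as 46768 above: the "ls -la" check over the four kubeconfig files fails because none of them exist after the stop, so the cluster is reconfigured from /var/tmp/minikube/kubeadm.yaml. A minimal local sketch of that existence check follows; minikube actually runs it over SSH on the guest, and only the file paths are taken from the log.

    package main

    import (
        "fmt"
        "os"
    )

    // kubeconfigsPresent reports whether all of the kubeconfig files that the
    // log checks with `ls -la` exist. This local os.Stat version is only a
    // sketch of the same idea; minikube performs the check on the guest.
    func kubeconfigsPresent() bool {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                fmt.Printf("missing %s: %v\n", f, err)
                return false
            }
        }
        return true
    }

    func main() {
        if !kubeconfigsPresent() {
            fmt.Println("config check failed: reconfiguring cluster from kubeadm.yaml")
        }
    }
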
	I0907 00:51:25.383423   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:25.413853   46768 api_server.go:72] duration metric: took 2.571070768s to wait for apiserver process to appear ...
	I0907 00:51:25.413877   46768 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:25.413895   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.168577   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:29.168617   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:29.168629   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.228753   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:29.228785   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:29.729501   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.735318   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:29.735345   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
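
The healthz probes above first return 403 (the anonymous user cannot read /healthz) and then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish; the run keeps polling until the endpoint returns 200 further down. A minimal Go sketch of such polling follows, assuming it is acceptable to skip TLS verification for brevity; minikube instead verifies against the cluster CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200. 403 and 500
    // responses, as seen in the log while post-start hooks finish, are treated
    // as "not ready yet". Skipping TLS verification is a simplification.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz at %s not OK within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.125:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
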
	I0907 00:51:26.886341   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:26.886797   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:26.886819   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:26.886742   47907 retry.go:31] will retry after 3.776269501s: waiting for machine to come up
	I0907 00:51:30.665170   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.665736   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Found IP for machine: 192.168.39.96
	I0907 00:51:30.665770   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Reserving static IP address...
	I0907 00:51:30.665788   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has current primary IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.666183   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-773466", mac: "52:54:00:61:2c:44", ip: "192.168.39.96"} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.666226   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | skip adding static IP to network mk-default-k8s-diff-port-773466 - found existing host DHCP lease matching {name: "default-k8s-diff-port-773466", mac: "52:54:00:61:2c:44", ip: "192.168.39.96"}
	I0907 00:51:30.666245   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Reserved static IP address: 192.168.39.96
	I0907 00:51:30.666262   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for SSH to be available...
	I0907 00:51:30.666279   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Getting to WaitForSSH function...
	I0907 00:51:30.668591   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
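
The 47297 machine start above waits for a DHCP lease with growing delays ("will retry after 2.87s", "after 3.77s", ...). Those delays come from minikube's own retry helper; the sketch below is only a simplified stand-in with a fixed 1.5x growth factor and a made-up success condition.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or attempts run out,
    // sleeping a little longer after each failure, roughly like the
    // "will retry after ..." lines above.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("attempt %d failed (%v), will retry after %s\n", i+1, err, delay)
            time.Sleep(delay)
            delay = delay * 3 / 2
        }
        return err
    }

    func main() {
        ipKnown := false // stand-in for "DHCP lease observed"
        err := retryWithBackoff(5, 2*time.Second, func() error {
            if !ipKnown {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
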
	I0907 00:51:30.229871   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:30.240735   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:30.240764   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:30.729911   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:30.736989   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 200:
	ok
	I0907 00:51:30.746939   46768 api_server.go:141] control plane version: v1.28.1
	I0907 00:51:30.746964   46768 api_server.go:131] duration metric: took 5.333080985s to wait for apiserver health ...
	I0907 00:51:30.746973   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:51:30.746979   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:30.748709   46768 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:32.716941   46354 start.go:369] acquired machines lock for "old-k8s-version-940806" in 56.927952192s
	I0907 00:51:32.717002   46354 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:51:32.717014   46354 fix.go:54] fixHost starting: 
	I0907 00:51:32.717431   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:32.717466   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:32.735021   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39241
	I0907 00:51:32.735485   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:32.736057   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:51:32.736083   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:32.736457   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:32.736713   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:32.736903   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:51:32.738719   46354 fix.go:102] recreateIfNeeded on old-k8s-version-940806: state=Stopped err=<nil>
	I0907 00:51:32.738743   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	W0907 00:51:32.738924   46354 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:51:32.740721   46354 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-940806" ...
	I0907 00:51:32.742202   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Start
	I0907 00:51:32.742362   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring networks are active...
	I0907 00:51:32.743087   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring network default is active
	I0907 00:51:32.743499   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring network mk-old-k8s-version-940806 is active
	I0907 00:51:32.743863   46354 main.go:141] libmachine: (old-k8s-version-940806) Getting domain xml...
	I0907 00:51:32.744603   46354 main.go:141] libmachine: (old-k8s-version-940806) Creating domain...
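
fixHost sees the old-k8s-version-940806 domain in state Stopped and restarts it through the kvm2 driver, which talks to libvirt directly. The sketch below only illustrates the same restart step using the virsh CLI; that is not how minikube itself does it, and the connection URI is an assumption.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // startDomain restarts a stopped libvirt domain by shelling out to virsh,
    // as an illustration of the "Restarting existing kvm2 VM" step above.
    func startDomain(name string) error {
        out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", name).CombinedOutput()
        if err != nil {
            return fmt.Errorf("virsh start %s: %v: %s", name, err, out)
        }
        return nil
    }

    func main() {
        if err := startDomain("old-k8s-version-940806"); err != nil {
            fmt.Println(err)
        }
    }
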
	I0907 00:51:30.668969   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.670773   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.670838   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Using SSH client type: external
	I0907 00:51:30.670876   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa (-rw-------)
	I0907 00:51:30.670918   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:30.670934   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | About to run SSH command:
	I0907 00:51:30.670947   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | exit 0
	I0907 00:51:30.770939   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:30.771333   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetConfigRaw
	I0907 00:51:30.772100   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:30.775128   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.775616   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.775654   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.775923   47297 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/config.json ...
	I0907 00:51:30.776161   47297 machine.go:88] provisioning docker machine ...
	I0907 00:51:30.776180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:30.776399   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:30.776597   47297 buildroot.go:166] provisioning hostname "default-k8s-diff-port-773466"
	I0907 00:51:30.776618   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:30.776805   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:30.779367   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.779761   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.779793   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.780022   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:30.780238   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.780399   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.780534   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:30.780687   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:30.781088   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:30.781102   47297 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-773466 && echo "default-k8s-diff-port-773466" | sudo tee /etc/hostname
	I0907 00:51:30.932287   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-773466
	
	I0907 00:51:30.932320   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:30.935703   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.936111   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.936146   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.936324   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:30.936647   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.936851   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.937054   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:30.937266   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:30.937890   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:30.937932   47297 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-773466' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-773466/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-773466' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:31.091619   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
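
The SSH command above rewrites the 127.0.1.1 line only when the hostname is not already present in /etc/hosts, so repeated provisioning stays idempotent. Below is a Go sketch of the same check-then-append idea against a local file; the path and entry are illustrative, and editing the real /etc/hosts would need root.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsLine appends "ip name" to path only if no existing line
    // already mentions name, mirroring the grep-then-tee shell snippet above.
    func ensureHostsLine(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        for _, line := range strings.Split(string(data), "\n") {
            if strings.Contains(line, name) {
                return nil // already present, nothing to do
            }
        }
        f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = fmt.Fprintf(f, "%s %s\n", ip, name)
        return err
    }

    func main() {
        if err := ensureHostsLine("hosts.test", "127.0.1.1", "default-k8s-diff-port-773466"); err != nil {
            fmt.Println(err)
        }
    }
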
	I0907 00:51:31.091654   47297 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:31.091707   47297 buildroot.go:174] setting up certificates
	I0907 00:51:31.091724   47297 provision.go:83] configureAuth start
	I0907 00:51:31.091746   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:31.092066   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:31.095183   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.095670   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.095710   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.095861   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:31.098597   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.098887   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.098962   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.099205   47297 provision.go:138] copyHostCerts
	I0907 00:51:31.099275   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:31.099291   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:31.099362   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:31.099516   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:31.099531   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:31.099563   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:31.099658   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:31.099671   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:31.099700   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:31.099807   47297 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-773466 san=[192.168.39.96 192.168.39.96 localhost 127.0.0.1 minikube default-k8s-diff-port-773466]
	I0907 00:51:31.793599   47297 provision.go:172] copyRemoteCerts
	I0907 00:51:31.793653   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:31.793676   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:31.796773   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.797153   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.797192   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.797362   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:31.797578   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:31.797751   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:31.797865   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:31.903781   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:31.935908   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0907 00:51:31.967385   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0907 00:51:31.998542   47297 provision.go:86] duration metric: configureAuth took 906.744341ms
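
configureAuth above copies the host CA material to the guest and generates server.pem with SANs covering the machine IP, localhost, minikube and the profile name (the san=[...] list in the log). The sketch below produces a comparable certificate with crypto/x509 but self-signs it for brevity, whereas the real server.pem is signed by the minikube CA; the subject organization is made up.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // SAN values taken from the provision.go log line above.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"example"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.96"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-773466"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
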
	I0907 00:51:31.998576   47297 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:31.998836   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:31.998941   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.002251   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.002712   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.002747   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.002996   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.003300   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.003531   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.003717   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.003996   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:32.004637   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:32.004662   47297 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:32.413687   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:32.413765   47297 machine.go:91] provisioned docker machine in 1.637590059s
	I0907 00:51:32.413777   47297 start.go:300] post-start starting for "default-k8s-diff-port-773466" (driver="kvm2")
	I0907 00:51:32.413787   47297 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:32.413823   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.414183   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:32.414227   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.417432   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.417894   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.417954   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.418202   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.418371   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.418517   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.418625   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.523519   47297 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:32.528959   47297 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:32.528983   47297 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:32.529050   47297 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:32.529144   47297 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:32.529249   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:32.538827   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:32.569792   47297 start.go:303] post-start completed in 156.000078ms
	I0907 00:51:32.569819   47297 fix.go:56] fixHost completed within 20.830399155s
	I0907 00:51:32.569860   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.573180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.573599   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.573653   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.573846   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.574100   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.574292   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.574470   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.574658   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:32.575266   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:32.575282   47297 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:51:32.716793   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047892.656226759
	
	I0907 00:51:32.716819   47297 fix.go:206] guest clock: 1694047892.656226759
	I0907 00:51:32.716829   47297 fix.go:219] Guest: 2023-09-07 00:51:32.656226759 +0000 UTC Remote: 2023-09-07 00:51:32.569839112 +0000 UTC m=+181.933138455 (delta=86.387647ms)
	I0907 00:51:32.716855   47297 fix.go:190] guest clock delta is within tolerance: 86.387647ms
	I0907 00:51:32.716868   47297 start.go:83] releasing machines lock for "default-k8s-diff-port-773466", held for 20.977496549s
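
fix.go above compares the guest clock (read with "date +%s.%N" over SSH) against the host clock and accepts the roughly 86ms delta it measured. A small Go sketch of that comparison follows; the one-second tolerance in main is an assumption, not minikube's actual threshold, and the timestamps are copied from the log.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDelta parses the "seconds.nanoseconds" string produced by
    // `date +%s.%N` on the guest and returns how far it is from the local time.
    func clockDelta(guest string, local time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guest, 64)
        if err != nil {
            return 0, err
        }
        guestTime := time.Unix(0, int64(secs*float64(time.Second)))
        return guestTime.Sub(local), nil
    }

    func main() {
        // Guest and host values taken from the log lines above.
        delta, err := clockDelta("1694047892.656226759", time.Unix(1694047892, 569839112))
        if err != nil {
            fmt.Println(err)
            return
        }
        if math.Abs(float64(delta)) < float64(time.Second) {
            fmt.Printf("guest clock delta %s is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %s is too large, would resync\n", delta)
        }
    }
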
	I0907 00:51:32.716900   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.717205   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:32.720353   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.720794   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.720825   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.721001   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721495   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721675   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721767   47297 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:32.721813   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.721925   47297 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:32.721951   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.724909   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725154   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725464   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.725510   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725626   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.725808   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.725825   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.725845   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725869   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.725967   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.726058   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.726164   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.726216   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.726352   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.845353   47297 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:32.851616   47297 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:33.005642   47297 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:33.013527   47297 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:33.013603   47297 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:33.033433   47297 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:33.033467   47297 start.go:466] detecting cgroup driver to use...
	I0907 00:51:33.033538   47297 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:33.055861   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:33.073405   47297 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:33.073477   47297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:33.090484   47297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:33.104735   47297 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:33.245072   47297 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:33.411559   47297 docker.go:212] disabling docker service ...
	I0907 00:51:33.411625   47297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:33.429768   47297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:33.446597   47297 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:33.581915   47297 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:33.704648   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:33.721447   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:33.740243   47297 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:51:33.740330   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.750871   47297 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:33.750937   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.761620   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.774350   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.787718   47297 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:33.802740   47297 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:33.814899   47297 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:33.814975   47297 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:33.832422   47297 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:33.844513   47297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:34.020051   47297 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:34.252339   47297 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:34.252415   47297 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:34.258055   47297 start.go:534] Will wait 60s for crictl version
	I0907 00:51:34.258179   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:51:34.262511   47297 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:34.304552   47297 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:34.304626   47297 ssh_runner.go:195] Run: crio --version
	I0907 00:51:34.376009   47297 ssh_runner.go:195] Run: crio --version
	I0907 00:51:34.448097   47297 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
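
The runtime setup above pins the pause image to registry.k8s.io/pause:3.9 and forces the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf with sed, then restarts crio. The Go sketch below performs the equivalent substitutions on an in-memory string; minikube actually edits the file on the guest over SSH, and the sample config content is invented.

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf applies the same substitutions the log performs with
    // `sed -i`: pin the pause image and switch the cgroup manager to cgroupfs.
    func rewriteCrioConf(conf string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        sample := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.6"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
        fmt.Print(rewriteCrioConf(sample))
    }
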
	I0907 00:51:29.972856   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.178016   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.291593   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.385791   46833 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:30.385865   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:30.404991   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:30.926995   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:31.427043   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:31.927049   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.426422   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.927274   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.955713   46833 api_server.go:72] duration metric: took 2.569919035s to wait for apiserver process to appear ...
	I0907 00:51:32.955739   46833 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:32.955757   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:32.956284   46833 api_server.go:269] stopped: https://192.168.50.242:8443/healthz: Get "https://192.168.50.242:8443/healthz": dial tcp 192.168.50.242:8443: connect: connection refused
	I0907 00:51:32.956316   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:32.957189   46833 api_server.go:269] stopped: https://192.168.50.242:8443/healthz: Get "https://192.168.50.242:8443/healthz": dial tcp 192.168.50.242:8443: connect: connection refused
	I0907 00:51:33.457905   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:30.750097   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:51:30.784742   46768 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:51:30.828002   46768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:51:30.852490   46768 system_pods.go:59] 8 kube-system pods found
	I0907 00:51:30.852534   46768 system_pods.go:61] "coredns-5dd5756b68-6ndjc" [8f1f8224-b8b4-4fb6-8f6b-2f4a0fb18e17] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:51:30.852547   46768 system_pods.go:61] "etcd-no-preload-321164" [c4b2427c-d882-4d29-af41-553961e5ee48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:51:30.852559   46768 system_pods.go:61] "kube-apiserver-no-preload-321164" [339ca32b-a5a1-474c-a5db-c35e7f87506d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:51:30.852569   46768 system_pods.go:61] "kube-controller-manager-no-preload-321164" [36241c8a-13ce-4e68-887b-ed929258d688] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:51:30.852581   46768 system_pods.go:61] "kube-proxy-f7dm4" [69308cf3-c18e-4edb-b0ea-c7f34a51aed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:51:30.852595   46768 system_pods.go:61] "kube-scheduler-no-preload-321164" [e9b14f0e-7789-4d1d-9a15-02c88d4a1e3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:51:30.852606   46768 system_pods.go:61] "metrics-server-57f55c9bc5-s95n2" [938af7b2-936b-495c-84c9-d580ae646926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:51:30.852622   46768 system_pods.go:61] "storage-provisioner" [70c690a6-a383-4b3f-9817-954056580009] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:51:30.852633   46768 system_pods.go:74] duration metric: took 24.608458ms to wait for pod list to return data ...
	I0907 00:51:30.852646   46768 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:51:30.860785   46768 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:51:30.860811   46768 node_conditions.go:123] node cpu capacity is 2
	I0907 00:51:30.860821   46768 node_conditions.go:105] duration metric: took 8.167675ms to run NodePressure ...
	I0907 00:51:30.860837   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:31.343033   46768 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:51:31.349908   46768 kubeadm.go:787] kubelet initialised
	I0907 00:51:31.349936   46768 kubeadm.go:788] duration metric: took 6.87538ms waiting for restarted kubelet to initialise ...
	I0907 00:51:31.349944   46768 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:31.366931   46768 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:33.392559   46768 pod_ready.go:102] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"False"
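The system_pods.go and pod_ready.go lines above poll the kube-system namespace until each control-plane pod reports Ready. A minimal client-go sketch of that kind of check (an illustration only, not minikube's own code; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List kube-system pods and report whether each has the PodReady
	// condition set to True - roughly what the log above is waiting for.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s Ready=%v\n", p.Name, ready)
	}
}

The "Ready":"False" statuses in the log correspond to pods whose PodReady condition has not yet turned True.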
	I0907 00:51:34.449546   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:34.452803   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:34.453196   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:34.453226   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:34.453551   47297 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:34.459166   47297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:34.475045   47297 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:51:34.475159   47297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:34.525380   47297 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:51:34.525495   47297 ssh_runner.go:195] Run: which lz4
	I0907 00:51:34.530921   47297 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0907 00:51:34.537992   47297 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:34.538062   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0907 00:51:34.298412   46354 main.go:141] libmachine: (old-k8s-version-940806) Waiting to get IP...
	I0907 00:51:34.299510   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.300108   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.300166   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.300103   48085 retry.go:31] will retry after 237.599934ms: waiting for machine to come up
	I0907 00:51:34.539798   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.540306   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.540406   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.540348   48085 retry.go:31] will retry after 321.765824ms: waiting for machine to come up
	I0907 00:51:34.864120   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.864735   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.864761   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.864698   48085 retry.go:31] will retry after 485.375139ms: waiting for machine to come up
	I0907 00:51:35.351583   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:35.352142   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:35.352174   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:35.352081   48085 retry.go:31] will retry after 490.428576ms: waiting for machine to come up
	I0907 00:51:35.844432   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:35.844896   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:35.844921   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:35.844821   48085 retry.go:31] will retry after 610.440599ms: waiting for machine to come up
	I0907 00:51:36.456988   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:36.457697   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:36.457720   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:36.457634   48085 retry.go:31] will retry after 704.547341ms: waiting for machine to come up
	I0907 00:51:37.163551   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:37.163973   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:37.164001   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:37.163926   48085 retry.go:31] will retry after 825.931424ms: waiting for machine to come up
	I0907 00:51:37.991936   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:37.992550   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:37.992583   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:37.992489   48085 retry.go:31] will retry after 952.175868ms: waiting for machine to come up
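The retry.go lines above wait for the VM's DHCP lease to appear, sleeping a growing, jittered interval between attempts. A sketch of that retry shape in plain Go (a standalone illustration, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or the timeout elapses, sleeping
// an increasing, jittered interval between attempts - the same shape as the
// "will retry after ..." lines above.
func retryWithBackoff(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2 // grow the interval, roughly matching the log's cadence
	}
}

func main() {
	// Example: a lookup that only succeeds on the fourth attempt.
	calls := 0
	err := retryWithBackoff(10*time.Second, func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("result:", err)
}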
	I0907 00:51:37.065943   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:37.065973   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:37.065987   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.176178   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:37.176213   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:37.457739   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.464386   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:37.464423   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:37.958094   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.966530   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:37.966561   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:38.458170   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:38.465933   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 200:
	ok
	I0907 00:51:38.477109   46833 api_server.go:141] control plane version: v1.28.1
	I0907 00:51:38.477135   46833 api_server.go:131] duration metric: took 5.521389594s to wait for apiserver health ...
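The healthz probes above progress from "connection refused" (the apiserver is not yet listening) through 403 and 500 responses (anonymous requests, RBAC bootstrap hooks still failing) to a final 200. A minimal sketch of such a poll, using the address from this log and skipping TLS verification (no client certificate is sent, which is why the early probes come back 403 for system:anonymous):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.242:8443/healthz"
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver is restarting
			fmt.Println("stopped:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}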
	I0907 00:51:38.477143   46833 cni.go:84] Creating CNI manager for ""
	I0907 00:51:38.477149   46833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:38.478964   46833 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:38.480383   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:51:38.509844   46833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
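The two lines above create /etc/cni/net.d and write a 457-byte bridge conflist into it. A sketch of writing such a file; the JSON body below is a generic bridge + portmap chain for illustration, not the exact file minikube ships:

package main

import "os"

func main() {
	// Illustrative bridge CNI config; writing to /etc requires root.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}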
	I0907 00:51:38.549403   46833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:51:38.571430   46833 system_pods.go:59] 8 kube-system pods found
	I0907 00:51:38.571472   46833 system_pods.go:61] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:51:38.571491   46833 system_pods.go:61] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:51:38.571503   46833 system_pods.go:61] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:51:38.571563   46833 system_pods.go:61] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:51:38.571575   46833 system_pods.go:61] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:51:38.571592   46833 system_pods.go:61] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:51:38.571602   46833 system_pods.go:61] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:51:38.571613   46833 system_pods.go:61] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:51:38.571626   46833 system_pods.go:74] duration metric: took 22.19998ms to wait for pod list to return data ...
	I0907 00:51:38.571637   46833 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:51:38.581324   46833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:51:38.581361   46833 node_conditions.go:123] node cpu capacity is 2
	I0907 00:51:38.581373   46833 node_conditions.go:105] duration metric: took 9.730463ms to run NodePressure ...
	I0907 00:51:38.581393   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:39.140602   46833 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:51:39.147994   46833 kubeadm.go:787] kubelet initialised
	I0907 00:51:39.148025   46833 kubeadm.go:788] duration metric: took 7.397807ms waiting for restarted kubelet to initialise ...
	I0907 00:51:39.148034   46833 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:39.157241   46833 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.172898   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.172935   46833 pod_ready.go:81] duration metric: took 15.665673ms waiting for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.172947   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.172958   46833 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.180630   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "etcd-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.180666   46833 pod_ready.go:81] duration metric: took 7.698054ms waiting for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.180679   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "etcd-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.180692   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.202626   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.202658   46833 pod_ready.go:81] duration metric: took 21.956163ms waiting for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.202671   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.202699   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.210817   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.210849   46833 pod_ready.go:81] duration metric: took 8.138129ms waiting for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.210860   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.210882   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.801924   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-proxy-47255" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.801951   46833 pod_ready.go:81] duration metric: took 591.060955ms waiting for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.801963   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-proxy-47255" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.801970   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:35.403877   46768 pod_ready.go:102] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:36.394774   46768 pod_ready.go:92] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:36.394823   46768 pod_ready.go:81] duration metric: took 5.027852065s waiting for pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:36.394839   46768 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:38.429614   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:36.550649   47297 crio.go:444] Took 2.019779 seconds to copy over tarball
	I0907 00:51:36.550726   47297 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:51:40.133828   47297 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.583074443s)
	I0907 00:51:40.133861   47297 crio.go:451] Took 3.583177 seconds to extract the tarball
	I0907 00:51:40.133872   47297 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:51:40.177675   47297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:40.230574   47297 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:51:40.230594   47297 cache_images.go:84] Images are preloaded, skipping loading
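The crictl calls above decide whether the preload tarball needs to be copied and extracted: once the expected control-plane image tag is present, loading is skipped. A sketch of that check, with the JSON field names assumed from crictl's `images --output json` format:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages mirrors only the fields of `crictl images --output json`
// that this check needs (field names assumed).
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	// If the expected apiserver image is missing, the preload tarball has to
	// be transferred and extracted, as the log above shows.
	want := "registry.k8s.io/kube-apiserver:v1.28.1"
	found := false
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				found = true
			}
		}
	}
	fmt.Println("preloaded:", found)
}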
	I0907 00:51:40.230654   47297 ssh_runner.go:195] Run: crio config
	I0907 00:51:40.296445   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:51:40.296473   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:40.296497   47297 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:40.296519   47297 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.96 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-773466 NodeName:default-k8s-diff-port-773466 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:40.296709   47297 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.96
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-773466"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:40.296793   47297 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-773466 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
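The kubelet drop-in above is generated from the node's settings (Kubernetes version, hostname override, node IP). A sketch of rendering such a unit with text/template; the struct and field names are illustrative, not minikube's own template:

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	data := struct{ KubernetesVersion, NodeName, NodeIP string }{
		KubernetesVersion: "v1.28.1",
		NodeName:          "default-k8s-diff-port-773466",
		NodeIP:            "192.168.39.96",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}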
	I0907 00:51:40.296850   47297 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:40.307543   47297 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:40.307642   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:40.318841   47297 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0907 00:51:40.337125   47297 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:40.354910   47297 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0907 00:51:40.375283   47297 ssh_runner.go:195] Run: grep 192.168.39.96	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:40.380206   47297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:40.394943   47297 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466 for IP: 192.168.39.96
	I0907 00:51:40.394980   47297 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.395194   47297 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:40.395231   47297 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:40.395295   47297 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.key
	I0907 00:51:40.410649   47297 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.key.e8bbde58
	I0907 00:51:40.410724   47297 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.key
	I0907 00:51:40.410868   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:40.410904   47297 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:40.410916   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:40.410942   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:40.410963   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:40.410985   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:40.411038   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:40.411575   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:40.441079   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 00:51:40.465854   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:40.495221   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:51:40.521493   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:40.548227   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:40.574366   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:40.599116   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:40.624901   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:40.650606   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:40.690154   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.690183   46833 pod_ready.go:81] duration metric: took 888.205223ms waiting for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:40.690194   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.690204   46833 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:40.697723   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.697750   46833 pod_ready.go:81] duration metric: took 7.538932ms waiting for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:40.697761   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.697773   46833 pod_ready.go:38] duration metric: took 1.549726748s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:40.697793   46833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:51:40.709255   46833 ops.go:34] apiserver oom_adj: -16
	I0907 00:51:40.709281   46833 kubeadm.go:640] restartCluster took 21.865468537s
	I0907 00:51:40.709290   46833 kubeadm.go:406] StartCluster complete in 21.917781616s
	I0907 00:51:40.709309   46833 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.709403   46833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:51:40.712326   46833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.808025   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:51:40.808158   46833 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:51:40.808236   46833 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:40.808285   46833 addons.go:69] Setting metrics-server=true in profile "embed-certs-546209"
	I0907 00:51:40.808309   46833 addons.go:231] Setting addon metrics-server=true in "embed-certs-546209"
	W0907 00:51:40.808317   46833 addons.go:240] addon metrics-server should already be in state true
	I0907 00:51:40.808252   46833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-546209"
	I0907 00:51:40.808340   46833 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-546209"
	W0907 00:51:40.808354   46833 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:51:40.808375   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:40.808390   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:40.808257   46833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-546209"
	I0907 00:51:40.808493   46833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-546209"
	I0907 00:51:40.809864   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.809936   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.810411   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.810477   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.810518   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.810526   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.827159   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36263
	I0907 00:51:40.827608   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0907 00:51:40.827784   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.828059   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.828326   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.828354   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.828556   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.828579   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.828955   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.829067   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.829670   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.829715   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.829932   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.831070   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0907 00:51:40.831543   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.832142   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.832161   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.832527   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.834743   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.834801   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.853510   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0907 00:51:40.854194   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45027
	I0907 00:51:40.854261   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.854987   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.855019   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.855102   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.855381   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.855745   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.855791   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.855808   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.856430   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.856882   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.858468   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:41.154848   46833 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:51:40.859116   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:41.300012   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:51:41.362259   46833 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:51:41.362296   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:51:41.362332   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:41.460930   46833 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:51:41.460961   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:51:41.460988   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:41.464836   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465151   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465419   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:41.465455   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465590   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:41.465621   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465764   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:41.465908   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:41.465979   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:41.466055   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:41.466150   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:41.466196   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:41.466276   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:41.466309   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:41.587470   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:51:41.594683   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:51:41.594709   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:51:41.621438   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:51:41.621471   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:51:41.664886   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:51:41.664910   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:51:41.691795   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:51:41.886942   46833 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.078877765s)
	I0907 00:51:41.887038   46833 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0907 00:51:41.898851   46833 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-546209" context rescaled to 1 replicas
	I0907 00:51:41.898900   46833 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:51:42.014441   46833 out.go:177] * Verifying Kubernetes components...
	I0907 00:51:38.946740   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:38.947268   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:38.947292   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:38.947211   48085 retry.go:31] will retry after 1.334104337s: waiting for machine to come up
	I0907 00:51:40.282730   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:40.283209   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:40.283233   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:40.283168   48085 retry.go:31] will retry after 1.521256667s: waiting for machine to come up
	I0907 00:51:41.806681   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:41.807182   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:41.807211   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:41.807126   48085 retry.go:31] will retry after 1.907600342s: waiting for machine to come up
	I0907 00:51:42.132070   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:51:42.150876   46833 addons.go:231] Setting addon default-storageclass=true in "embed-certs-546209"
	W0907 00:51:42.150905   46833 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:51:42.150935   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:42.151329   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:42.151357   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:42.172605   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0907 00:51:42.173122   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:42.173662   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:42.173709   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:42.174155   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:42.174813   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:42.174877   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:42.196701   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0907 00:51:42.197287   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:42.197859   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:42.197882   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:42.198246   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:42.198418   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:42.200558   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:42.200942   46833 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:51:42.200954   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:51:42.200967   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:42.204259   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:42.204952   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:42.204975   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:42.205009   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:42.205139   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:42.205280   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:42.205405   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:42.377838   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:51:43.286666   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.699154782s)
	I0907 00:51:43.286720   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.286734   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.287148   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.287174   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.287190   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.287210   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.287220   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.288970   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.289008   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.289021   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.436691   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.744844788s)
	I0907 00:51:43.436717   46833 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.304610389s)
	I0907 00:51:43.436744   46833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-546209" to be "Ready" ...
	I0907 00:51:43.436758   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.436775   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.436862   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.05899604s)
	I0907 00:51:43.436883   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.436893   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.438856   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.438887   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.438903   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.438907   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.438914   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.438919   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.438924   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.438932   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.438934   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.439020   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.439206   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439219   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.439231   46833 addons.go:467] Verifying addon metrics-server=true in "embed-certs-546209"
	I0907 00:51:43.439266   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439277   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.439290   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.439299   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.439502   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439513   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.442917   46833 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0907 00:51:43.444226   46833 addons.go:502] enable addons completed in 2.636061813s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0907 00:51:40.924494   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:42.925582   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:40.679951   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:40.859542   47297 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:40.881658   47297 ssh_runner.go:195] Run: openssl version
	I0907 00:51:40.888518   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:40.902200   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.908038   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.908106   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.914418   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:40.927511   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:40.941360   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.947556   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.947622   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.953780   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:40.966576   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:40.981447   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:40.989719   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:40.989779   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:41.000685   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:41.017936   47297 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:41.023280   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:41.029915   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:41.038011   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:41.044570   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:41.052534   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:41.060580   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
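(For reference: the six "openssl x509 ... -checkend 86400" runs above simply ask whether each certificate expires within the next 24 hours. The Go sketch below performs the same check; it is illustrative only, not minikube source, and the certificate path is just one of the paths from the log.)

// certexpiry.go - minimal sketch of what "-checkend 86400" verifies.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at certPath expires within the given window.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// openssl's -checkend N asks whether NotAfter falls before now+N seconds.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}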
	I0907 00:51:41.068664   47297 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:41.068776   47297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:41.068897   47297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:41.111849   47297 cri.go:89] found id: ""
	I0907 00:51:41.111923   47297 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:41.126171   47297 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:41.126193   47297 kubeadm.go:636] restartCluster start
	I0907 00:51:41.126249   47297 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:41.138401   47297 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.139882   47297 kubeconfig.go:92] found "default-k8s-diff-port-773466" server: "https://192.168.39.96:8444"
	I0907 00:51:41.142907   47297 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:41.154285   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.154346   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.168992   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.169012   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.169057   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.183283   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.683942   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.684036   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.701647   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:42.183800   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:42.183882   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:42.213176   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:42.683460   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:42.683550   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:42.701805   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.184099   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:43.184206   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:43.202359   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.683466   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:43.683541   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:43.697133   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:44.183663   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:44.183750   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:44.201236   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:44.684320   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:44.684411   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:44.698198   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:45.183451   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:45.183533   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:45.197529   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.716005   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:43.716632   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:43.716668   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:43.716570   48085 retry.go:31] will retry after 3.526983217s: waiting for machine to come up
	I0907 00:51:47.245213   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:47.245615   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:47.245645   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:47.245561   48085 retry.go:31] will retry after 3.453934877s: waiting for machine to come up
	I0907 00:51:45.450760   46833 node_ready.go:58] node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:47.949024   46833 node_ready.go:49] node "embed-certs-546209" has status "Ready":"True"
	I0907 00:51:47.949053   46833 node_ready.go:38] duration metric: took 4.512298071s waiting for node "embed-certs-546209" to be "Ready" ...
	I0907 00:51:47.949063   46833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:47.956755   46833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.964323   46833 pod_ready.go:92] pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:47.964345   46833 pod_ready.go:81] duration metric: took 7.56298ms waiting for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.964356   46833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.425347   46768 pod_ready.go:92] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.425370   46768 pod_ready.go:81] duration metric: took 9.030524984s waiting for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.425380   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.432508   46768 pod_ready.go:92] pod "kube-apiserver-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.432531   46768 pod_ready.go:81] duration metric: took 7.145112ms waiting for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.432545   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.441245   46768 pod_ready.go:92] pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.441265   46768 pod_ready.go:81] duration metric: took 8.713177ms waiting for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.441275   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f7dm4" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.446603   46768 pod_ready.go:92] pod "kube-proxy-f7dm4" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.446627   46768 pod_ready.go:81] duration metric: took 5.346628ms waiting for pod "kube-proxy-f7dm4" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.446641   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.453061   46768 pod_ready.go:92] pod "kube-scheduler-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.453091   46768 pod_ready.go:81] duration metric: took 6.442457ms waiting for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.453104   46768 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.730093   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:45.684191   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:45.684287   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:45.702020   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:46.183587   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:46.183697   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:46.201390   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:46.683442   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:46.683519   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:46.699015   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:47.183908   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:47.183998   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:47.196617   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:47.683929   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:47.683991   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:47.696499   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:48.183929   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:48.184000   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:48.197425   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:48.683932   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:48.684019   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:48.696986   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:49.184149   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:49.184224   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:49.197363   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:49.684066   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:49.684152   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:49.697853   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:50.183372   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:50.183490   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:50.195818   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:50.700500   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:50.700920   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:50.700939   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:50.700882   48085 retry.go:31] will retry after 4.6319983s: waiting for machine to come up
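(For reference: the repeated "will retry after ..." lines above come from a backoff-and-retry loop that polls until the libvirt domain reports an IP address. The Go sketch below is illustrative only, not minikube's retry.go; waitForIP is a hypothetical stand-in for the DHCP-lease lookup.)

// retrysketch.go - minimal jittered-backoff loop in the spirit of the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP is a hypothetical probe; here it succeeds on the fifth attempt.
func waitForIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.83.245", nil
}

func main() {
	backoff := time.Second
	for attempt := 1; ; attempt++ {
		ip, err := waitForIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Add jitter and grow the delay, mirroring the increasing intervals in the log.
		delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		backoff += time.Second
	}
}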
	I0907 00:51:49.984505   46833 pod_ready.go:102] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:51.987061   46833 pod_ready.go:102] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:53.485331   46833 pod_ready.go:92] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.485356   46833 pod_ready.go:81] duration metric: took 5.520993929s waiting for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.485368   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.491351   46833 pod_ready.go:92] pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.491371   46833 pod_ready.go:81] duration metric: took 5.996687ms waiting for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.491387   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.496425   46833 pod_ready.go:92] pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.496448   46833 pod_ready.go:81] duration metric: took 5.054087ms waiting for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.496460   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.504963   46833 pod_ready.go:92] pod "kube-proxy-47255" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.504982   46833 pod_ready.go:81] duration metric: took 8.515814ms waiting for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.504990   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.550180   46833 pod_ready.go:92] pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.550208   46833 pod_ready.go:81] duration metric: took 45.211992ms waiting for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.550222   46833 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
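(For reference: the pod_ready lines above poll each system-critical pod until its Ready condition is True. The sketch below shows the same idea with client-go; it is illustrative only, not minikube's pod_ready.go, and the kubeconfig path and pod name are assumptions taken from the log.)

// podready.go - minimal client-go polling loop for a pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-546209", metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}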
	I0907 00:51:50.229069   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:52.233340   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:54.728824   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:50.683740   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:50.683806   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:50.695528   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:51.154940   47297 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:51.154990   47297 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:51.155002   47297 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:51.155052   47297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:51.190293   47297 cri.go:89] found id: ""
	I0907 00:51:51.190351   47297 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:51.207237   47297 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:51.216623   47297 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:51.216671   47297 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:51.226376   47297 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:51.226399   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:51.352763   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:51.879625   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.090367   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.169714   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.258757   47297 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:52.258861   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:52.274881   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:52.799083   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:53.298600   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:53.798807   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.299419   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.798660   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.824175   47297 api_server.go:72] duration metric: took 2.565415526s to wait for apiserver process to appear ...
	I0907 00:51:54.824203   47297 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:54.824222   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
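(For reference: the healthz wait above repeatedly probes the apiserver URL until it answers 200 OK. The Go sketch below is illustrative only, not minikube's api_server.go; the URL is copied from the log, and the TLS setup is an assumption, since a real client would verify against the cluster CA rather than skip verification.)

// healthzprobe.go - minimal polling of an apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Bare probe: skips certificate verification for simplicity (assumption, see note above).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.96:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}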
	I0907 00:51:55.335922   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.336311   46354 main.go:141] libmachine: (old-k8s-version-940806) Found IP for machine: 192.168.83.245
	I0907 00:51:55.336325   46354 main.go:141] libmachine: (old-k8s-version-940806) Reserving static IP address...
	I0907 00:51:55.336336   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has current primary IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.336816   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "old-k8s-version-940806", mac: "52:54:00:1f:83:50", ip: "192.168.83.245"} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.336872   46354 main.go:141] libmachine: (old-k8s-version-940806) Reserved static IP address: 192.168.83.245
	I0907 00:51:55.336893   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | skip adding static IP to network mk-old-k8s-version-940806 - found existing host DHCP lease matching {name: "old-k8s-version-940806", mac: "52:54:00:1f:83:50", ip: "192.168.83.245"}
	I0907 00:51:55.336909   46354 main.go:141] libmachine: (old-k8s-version-940806) Waiting for SSH to be available...
	I0907 00:51:55.336919   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Getting to WaitForSSH function...
	I0907 00:51:55.339323   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.339730   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.339768   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.339880   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Using SSH client type: external
	I0907 00:51:55.339907   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa (-rw-------)
	I0907 00:51:55.339946   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:55.339964   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | About to run SSH command:
	I0907 00:51:55.340001   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | exit 0
	I0907 00:51:55.483023   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:55.483362   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetConfigRaw
	I0907 00:51:55.484121   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:55.487091   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.487590   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.487621   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.487863   46354 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/config.json ...
	I0907 00:51:55.488067   46354 machine.go:88] provisioning docker machine ...
	I0907 00:51:55.488088   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:55.488332   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.488525   46354 buildroot.go:166] provisioning hostname "old-k8s-version-940806"
	I0907 00:51:55.488551   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.488707   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.491136   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.491567   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.491600   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.491818   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.491950   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.492058   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.492133   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.492237   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:55.492685   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:55.492705   46354 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-940806 && echo "old-k8s-version-940806" | sudo tee /etc/hostname
	I0907 00:51:55.648589   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-940806
	
	I0907 00:51:55.648628   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.651624   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.652046   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.652094   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.652282   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.652472   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.652654   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.652813   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.652977   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:55.653628   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:55.653657   46354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-940806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-940806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-940806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:55.805542   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:55.805573   46354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:55.805607   46354 buildroot.go:174] setting up certificates
	I0907 00:51:55.805617   46354 provision.go:83] configureAuth start
	I0907 00:51:55.805629   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.805907   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:55.808800   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.809142   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.809175   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.809299   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.811385   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.811785   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.811812   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.811980   46354 provision.go:138] copyHostCerts
	I0907 00:51:55.812089   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:55.812104   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:55.812172   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:55.812287   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:55.812297   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:55.812321   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:55.812418   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:55.812427   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:55.812463   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:55.812538   46354 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-940806 san=[192.168.83.245 192.168.83.245 localhost 127.0.0.1 minikube old-k8s-version-940806]
	I0907 00:51:55.920274   46354 provision.go:172] copyRemoteCerts
	I0907 00:51:55.920327   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:55.920348   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.923183   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.923599   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.923632   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.923816   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.924011   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.924174   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.924335   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.020317   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:56.048299   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0907 00:51:56.075483   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:51:56.101118   46354 provision.go:86] duration metric: configureAuth took 295.488336ms
	I0907 00:51:56.101150   46354 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:56.101338   46354 config.go:182] Loaded profile config "old-k8s-version-940806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:51:56.101407   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.104235   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.104600   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.104640   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.104878   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.105093   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.105306   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.105495   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.105668   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:56.106199   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:56.106217   46354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:56.435571   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:56.435644   46354 machine.go:91] provisioned docker machine in 947.562946ms
	I0907 00:51:56.435662   46354 start.go:300] post-start starting for "old-k8s-version-940806" (driver="kvm2")
	I0907 00:51:56.435679   46354 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:56.435712   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.436041   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:56.436083   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.439187   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.439537   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.439563   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.439888   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.440116   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.440285   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.440427   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.542162   46354 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:56.546357   46354 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:56.546375   46354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:56.546435   46354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:56.546511   46354 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:56.546648   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:56.556125   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:56.577844   46354 start.go:303] post-start completed in 142.166343ms
	I0907 00:51:56.577874   46354 fix.go:56] fixHost completed within 23.860860531s
	I0907 00:51:56.577898   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.580726   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.581062   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.581090   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.581221   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.581540   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.581742   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.581909   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.582113   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:56.582532   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:56.582553   46354 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:51:56.715584   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047916.695896692
	
	I0907 00:51:56.715607   46354 fix.go:206] guest clock: 1694047916.695896692
	I0907 00:51:56.715615   46354 fix.go:219] Guest: 2023-09-07 00:51:56.695896692 +0000 UTC Remote: 2023-09-07 00:51:56.57787864 +0000 UTC m=+363.381197654 (delta=118.018052ms)
	I0907 00:51:56.715632   46354 fix.go:190] guest clock delta is within tolerance: 118.018052ms
	I0907 00:51:56.715639   46354 start.go:83] releasing machines lock for "old-k8s-version-940806", held for 23.998669865s
	I0907 00:51:56.715658   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.715909   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:56.718637   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.718992   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.719030   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.719203   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719646   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719852   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719935   46354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:56.719980   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.720050   46354 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:56.720068   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.722463   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.722752   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.722809   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.722850   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.723041   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.723208   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.723241   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.723282   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.723394   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.723406   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.723599   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.723632   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.723797   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.723956   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.835700   46354 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:56.841554   46354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:56.988658   46354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:56.995421   46354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:56.995495   46354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:57.011588   46354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:57.011608   46354 start.go:466] detecting cgroup driver to use...
	I0907 00:51:57.011669   46354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:57.029889   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:57.043942   46354 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:57.044002   46354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:57.056653   46354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:57.069205   46354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:57.184510   46354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:57.323399   46354 docker.go:212] disabling docker service ...
	I0907 00:51:57.323477   46354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:57.336506   46354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:57.348657   46354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:57.464450   46354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:57.577763   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:57.590934   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:57.609445   46354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0907 00:51:57.609500   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.619112   46354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:57.619173   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.629272   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.638702   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.648720   46354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:57.659046   46354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:57.667895   46354 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:57.667971   46354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:57.681673   46354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:57.690907   46354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:57.801113   46354 ssh_runner.go:195] Run: sudo systemctl restart crio
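
The sed edits above point cri-o at the pause image and the cgroupfs cgroup manager before crio is restarted. For reference only, a minimal Go sketch of the same two substitutions done with regexp instead of sed; the config path and values are the ones shown in the log, everything else (and the need for root to write the file) is an assumption, and a "systemctl restart crio" is still required afterwards:

// crioconf.go: rewrite pause_image and cgroup_manager in a cri-o drop-in config,
// equivalent to the two sed substitutions in the log. Sketch only; needs root to write.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.1"`))
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
	// As in the log, crio must be restarted for the change to take effect.
}
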
	I0907 00:51:57.978349   46354 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:57.978432   46354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:57.983665   46354 start.go:534] Will wait 60s for crictl version
	I0907 00:51:57.983714   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:51:57.988244   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:58.019548   46354 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:58.019616   46354 ssh_runner.go:195] Run: crio --version
	I0907 00:51:58.068229   46354 ssh_runner.go:195] Run: crio --version
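
The lines above wait up to 60s for /var/run/crio/crio.sock and then query crictl for the runtime version. A minimal standalone sketch of that wait-then-query loop, for illustration only (the socket path and timeout come from the log; the rest is assumed and this is not minikube's implementation):

// waitcri.go: poll for a CRI socket, then print the runtime version.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock" // socket path from the log
	deadline := time.Now().Add(60 * time.Second)

	// Wait for the socket file to appear, mirroring "Will wait 60s for socket path".
	for {
		if _, err := os.Stat(sock); err == nil {
			break
		}
		if time.Now().After(deadline) {
			log.Fatalf("timed out waiting for %s", sock)
		}
		time.Sleep(500 * time.Millisecond)
	}

	// Ask crictl for the runtime name/version, as the log does with "crictl version".
	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}
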
	I0907 00:51:58.118554   46354 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0907 00:51:58.120322   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:58.122944   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:58.123321   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:58.123377   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:58.123569   46354 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:58.128115   46354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
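
The one-liner above rewrites /etc/hosts so it carries exactly one host.minikube.internal record pointing at the host gateway. A hedged Go sketch of the same filter-then-append step (the 192.168.83.1 address comes from the log; writing /etc/hosts needs root, and this is only an illustration):

// hostrecord.go: ensure /etc/hosts carries a single host.minikube.internal entry.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.83.1\thost.minikube.internal" // gateway IP from the log

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}

	// Drop any stale host.minikube.internal line, then append the current one.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
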
	I0907 00:51:58.140862   46354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0907 00:51:58.140933   46354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:58.182745   46354 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0907 00:51:58.182829   46354 ssh_runner.go:195] Run: which lz4
	I0907 00:51:58.188491   46354 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0907 00:51:58.193202   46354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:58.193237   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0907 00:51:55.862451   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.363582   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.511655   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:58.511686   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:58.511699   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:58.549405   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:58.549442   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:59.050120   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:59.057915   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:59.057946   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:59.550150   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:59.559928   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:59.559970   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:52:00.050535   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:52:00.060556   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 200:
	ok
	I0907 00:52:00.069872   47297 api_server.go:141] control plane version: v1.28.1
	I0907 00:52:00.069898   47297 api_server.go:131] duration metric: took 5.245689478s to wait for apiserver health ...
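
The healthz exchange above (403 while RBAC bootstraps, then 500 with individual post-start hooks still failing, then 200) is normal apiserver startup behaviour. A minimal sketch of that kind of poll, assuming anonymous HTTPS access and skipping certificate verification the way an unauthenticated liveness probe would; the URL is the one from the log, the timeouts are illustrative:

// healthzpoll.go: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	const url = "https://192.168.39.96:8444/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip cert verification: this is an anonymous health probe, not an API call.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			// 403/500 while post-start hooks finish: report and retry.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}
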
	I0907 00:52:00.069906   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:52:00.069911   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:00.071700   47297 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:56.730172   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.731973   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:00.073858   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:52:00.098341   47297 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:52:00.120355   47297 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:52:00.137820   47297 system_pods.go:59] 8 kube-system pods found
	I0907 00:52:00.137936   47297 system_pods.go:61] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:52:00.137967   47297 system_pods.go:61] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:52:00.137989   47297 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:52:00.138007   47297 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:52:00.138018   47297 system_pods.go:61] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:52:00.138032   47297 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:52:00.138045   47297 system_pods.go:61] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:52:00.138058   47297 system_pods.go:61] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:52:00.138069   47297 system_pods.go:74] duration metric: took 17.695163ms to wait for pod list to return data ...
	I0907 00:52:00.138082   47297 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:52:00.145755   47297 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:52:00.145790   47297 node_conditions.go:123] node cpu capacity is 2
	I0907 00:52:00.145803   47297 node_conditions.go:105] duration metric: took 7.711411ms to run NodePressure ...
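
The NodePressure step above just reads the node's capacity fields (17784752Ki ephemeral storage, 2 CPUs here). For reference, a sketch that pulls the same fields with kubectl's jsonpath output; the field paths are standard Node status fields, not something taken from minikube, and the kubeconfig in use is assumed to point at the cluster:

// nodecap.go: print node cpu and ephemeral-storage capacity via kubectl jsonpath.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o",
		`jsonpath={range .items[*]}{.metadata.name}{" cpu="}{.status.capacity.cpu}{" ephemeral-storage="}{.status.capacity.ephemeral-storage}{"\n"}{end}`).Output()
	if err != nil {
		log.Fatalf("kubectl get nodes failed: %v", err)
	}
	fmt.Print(string(out))
}
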
	I0907 00:52:00.145825   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:00.468823   47297 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:52:00.476107   47297 kubeadm.go:787] kubelet initialised
	I0907 00:52:00.476130   47297 kubeadm.go:788] duration metric: took 7.282541ms waiting for restarted kubelet to initialise ...
	I0907 00:52:00.476138   47297 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:00.483366   47297 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.495045   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.495072   47297 pod_ready.go:81] duration metric: took 11.633116ms waiting for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.495083   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.495092   47297 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.500465   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.500488   47297 pod_ready.go:81] duration metric: took 5.386997ms waiting for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.500498   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.500504   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.507318   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.507392   47297 pod_ready.go:81] duration metric: took 6.878563ms waiting for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.507416   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.507436   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.527784   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.527820   47297 pod_ready.go:81] duration metric: took 20.36412ms waiting for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.527833   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.527844   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.936895   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-proxy-5bh7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.936926   47297 pod_ready.go:81] duration metric: took 409.073374ms waiting for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.936938   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-proxy-5bh7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.936947   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:01.325746   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.325777   47297 pod_ready.go:81] duration metric: took 388.819699ms waiting for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:01.325787   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.325798   47297 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:01.725791   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.725828   47297 pod_ready.go:81] duration metric: took 400.019773ms waiting for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:01.725840   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.725852   47297 pod_ready.go:38] duration metric: took 1.249702286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
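
The wait loop above skips every system-critical pod while the node itself still reports "Ready":"False", since a per-pod Ready check cannot succeed before the node is Ready. Outside minikube, the same two-stage check can be expressed with kubectl wait; a hedged sketch follows, where only the node name, namespace and component labels come from the log and the timeouts are illustrative:

// podwait.go: wait for the node and then the system-critical pods to report Ready.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// First the node, mirroring why the log skips pods while the node is not Ready.
	if err := run("wait", "--for=condition=Ready",
		"node/default-k8s-diff-port-773466", "--timeout=6m0s"); err != nil {
		log.Fatalf("node never became Ready: %v", err)
	}
	// Then the control-plane pods, by the same component labels the log lists.
	for _, selector := range []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	} {
		if err := run("wait", "-n", "kube-system", "--for=condition=Ready",
			"pod", "-l", selector, "--timeout=4m0s"); err != nil {
			log.Fatalf("pods with %q not Ready: %v", selector, err)
		}
	}
}
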
	I0907 00:52:01.725871   47297 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:52:01.742792   47297 ops.go:34] apiserver oom_adj: -16
	I0907 00:52:01.742816   47297 kubeadm.go:640] restartCluster took 20.616616394s
	I0907 00:52:01.742825   47297 kubeadm.go:406] StartCluster complete in 20.674170679s
	I0907 00:52:01.742843   47297 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:01.742936   47297 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:52:01.744735   47297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:01.744998   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:52:01.745113   47297 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:52:01.745212   47297 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745218   47297 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745232   47297 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.745240   47297 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:52:01.745232   47297 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-773466"
	I0907 00:52:01.745268   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:52:01.745301   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.745248   47297 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745432   47297 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.745442   47297 addons.go:240] addon metrics-server should already be in state true
	I0907 00:52:01.745489   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.745709   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745718   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745753   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.745813   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.745895   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745930   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.755156   47297 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-773466" context rescaled to 1 replicas
	I0907 00:52:01.755193   47297 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:52:01.757452   47297 out.go:177] * Verifying Kubernetes components...
	I0907 00:52:01.759076   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:52:01.763067   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0907 00:52:01.763578   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.764125   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.764147   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.764483   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.764668   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.764804   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0907 00:52:01.765385   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.765972   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.765988   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.766336   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.768468   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I0907 00:52:01.768952   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.768985   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.769339   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.769827   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.769860   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.770129   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.770612   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.770641   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.782323   47297 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.782353   47297 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:52:01.782387   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.782822   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.782858   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.788535   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0907 00:52:01.789169   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.789826   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.789845   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.790158   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0907 00:52:01.790340   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.790544   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.790616   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.791036   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.791055   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.791552   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.791726   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.793270   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.796517   47297 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:52:01.794011   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.798239   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:52:01.798266   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:52:01.798291   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.800176   47297 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:51:59.928894   46354 crio.go:444] Took 1.740438 seconds to copy over tarball
	I0907 00:51:59.928974   46354 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:52:03.105945   46354 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.176929999s)
	I0907 00:52:03.105977   46354 crio.go:451] Took 3.177055 seconds to extract the tarball
	I0907 00:52:03.105987   46354 ssh_runner.go:146] rm: /preloaded.tar.lz4
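
The preload path above is: check for /preloaded.tar.lz4, scp the ~441 MB tarball over, extract it under /var with lz4, then delete it. A minimal sketch of the extract-and-clean-up step, using the same "tar -I lz4 -C /var -xf" invocation the log runs; running it for real needs root plus the tar and lz4 binaries, and the path is the one from the log:

// preload.go: extract an lz4-compressed image tarball under /var and remove it.
package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("preload tarball missing: %v", err)
	}

	start := time.Now()
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract failed: %v", err)
	}
	log.Printf("took %s to extract the tarball", time.Since(start))

	// The tarball is only a transport vehicle; remove it once the layers are unpacked.
	if err := os.Remove(tarball); err != nil {
		log.Printf("cleanup failed: %v", err)
	}
}
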
	I0907 00:52:03.150092   46354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:52:03.193423   46354 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0907 00:52:03.193450   46354 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0907 00:52:03.193525   46354 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.193544   46354 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:03.193564   46354 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.193730   46354 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.193799   46354 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.193802   46354 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0907 00:52:03.193829   46354 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.193736   46354 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.194948   46354 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.195017   46354 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.194949   46354 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:03.195642   46354 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.195763   46354 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.195814   46354 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.195843   46354 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0907 00:52:03.195874   46354 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:01.801952   47297 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:52:01.801969   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:52:01.801989   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.800897   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I0907 00:52:01.801662   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.802261   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.802286   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.802332   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.802683   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.802922   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.802961   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.803124   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.804246   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.804272   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.804654   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.804870   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.805283   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.805314   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.805418   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.805448   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.805541   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.805723   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.805889   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.806052   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.822423   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32999
	I0907 00:52:01.822847   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.823441   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.823459   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.823843   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.824036   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.825740   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.826032   47297 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:52:01.826051   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:52:01.826076   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.829041   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.829284   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.829310   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.829407   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.829591   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.829712   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.830194   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.956646   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:52:01.956669   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:52:01.974183   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:52:01.978309   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:52:02.048672   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:52:02.048708   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:52:02.088069   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:52:02.088099   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:52:02.142271   47297 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-773466" to be "Ready" ...
	I0907 00:52:02.142668   47297 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0907 00:52:02.197788   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:52:03.587076   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.612851341s)
	I0907 00:52:03.587130   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587146   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587147   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.608805294s)
	I0907 00:52:03.587182   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587210   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587452   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.587493   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587514   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587525   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587535   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587495   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.587751   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587765   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587892   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587905   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587925   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587935   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.588252   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.588277   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.588285   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.588297   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.588305   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.588543   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.588555   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.648373   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.450538249s)
	I0907 00:52:03.648433   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.648449   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.648789   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.648824   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.648833   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.648848   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.648858   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.649118   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.649137   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.649153   47297 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-773466"
	I0907 00:52:03.834785   47297 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:52:00.858996   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:02.861983   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:01.228807   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:03.229017   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:04.154749   47297 node_ready.go:58] node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:04.260530   47297 addons.go:502] enable addons completed in 2.51536834s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0907 00:52:03.398538   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.480702   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.482201   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.482206   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0907 00:52:03.482815   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.484155   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.484815   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.698892   46354 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0907 00:52:03.698936   46354 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.698938   46354 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0907 00:52:03.698965   46354 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0907 00:52:03.699028   46354 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.698975   46354 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0907 00:52:03.698982   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.699069   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.699084   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.703734   46354 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0907 00:52:03.703764   46354 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.703796   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729259   46354 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0907 00:52:03.729295   46354 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.729331   46354 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0907 00:52:03.729366   46354 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.729373   46354 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0907 00:52:03.729394   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.729398   46354 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.729404   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729336   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729441   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729491   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.729519   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0907 00:52:03.729601   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.791169   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0907 00:52:03.814632   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0907 00:52:03.814660   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.814689   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.814747   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0907 00:52:03.814799   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.814839   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0907 00:52:03.814841   46354 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0907 00:52:03.876039   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0907 00:52:03.876095   46354 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0907 00:52:03.876082   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0907 00:52:03.876114   46354 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0907 00:52:03.876153   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0907 00:52:03.876158   46354 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0907 00:52:04.549426   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:05.733437   46354 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.85724297s)
	I0907 00:52:05.733479   46354 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0907 00:52:05.733519   46354 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.184052604s)
	I0907 00:52:05.733568   46354 cache_images.go:92] LoadImages completed in 2.540103614s
	W0907 00:52:05.733639   46354 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
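The image preload above follows a check-then-load pattern: the image ID is inspected in the container store first, and the cached tarball is only transferred and loaded when the image is missing. A minimal shell sketch of that pattern, not the exact minikube code; the IMG/TARBALL variables are illustrative and the paths are taken from the log:
	# Sketch: load a cached image tarball only if the image is not already present.
	IMG="registry.k8s.io/pause:3.1"
	TARBALL="/var/lib/minikube/images/pause_3.1"
	if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
	  sudo podman load -i "$TARBALL"
	fi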
	I0907 00:52:05.733723   46354 ssh_runner.go:195] Run: crio config
	I0907 00:52:05.795752   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:52:05.795780   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:05.795801   46354 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:52:05.795824   46354 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.245 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-940806 NodeName:old-k8s-version-940806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0907 00:52:05.795975   46354 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-940806"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-940806
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.245:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:52:05.796074   46354 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-940806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-940806 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
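With the kubeadm config and kubelet unit rendered, the restart path below stages the new config as kubeadm.yaml.new, diffs it against the live kubeadm.yaml, and copies it into place only when it has changed (see the diff and cp commands later in the log). A simplified sketch of that stage-compare-replace step, with paths taken from the log and the surrounding logic condensed:
	# Sketch: replace the live kubeadm config only when the staged copy differs.
	NEW=/var/tmp/minikube/kubeadm.yaml.new
	CUR=/var/tmp/minikube/kubeadm.yaml
	if ! sudo diff -u "$CUR" "$NEW" >/dev/null 2>&1; then
	  sudo cp "$NEW" "$CUR"
	fi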
	I0907 00:52:05.796135   46354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0907 00:52:05.807772   46354 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:52:05.807864   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:52:05.818185   46354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0907 00:52:05.835526   46354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:52:05.853219   46354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0907 00:52:05.873248   46354 ssh_runner.go:195] Run: grep 192.168.83.245	control-plane.minikube.internal$ /etc/hosts
	I0907 00:52:05.877640   46354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:52:05.890975   46354 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806 for IP: 192.168.83.245
	I0907 00:52:05.891009   46354 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:05.891171   46354 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:52:05.891226   46354 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:52:05.891327   46354 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.key
	I0907 00:52:05.891407   46354 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.key.8de8e89b
	I0907 00:52:05.891459   46354 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.key
	I0907 00:52:05.891667   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:52:05.891713   46354 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:52:05.891729   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:52:05.891766   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:52:05.891801   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:52:05.891836   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:52:05.891913   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:52:05.892547   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:52:05.917196   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:52:05.942387   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:52:05.965551   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:52:05.987658   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:52:06.012449   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:52:06.037055   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:52:06.061051   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:52:06.085002   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:52:06.109132   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:52:06.132091   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:52:06.155215   46354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:52:06.173122   46354 ssh_runner.go:195] Run: openssl version
	I0907 00:52:06.178736   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:52:06.189991   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.194548   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.194596   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.200538   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:52:06.212151   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:52:06.224356   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.229976   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.230037   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.236389   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:52:06.248369   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:52:06.259325   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.264451   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.264514   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.270564   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
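The test -L / ln -fs pairs above install each CA into the system trust directory under its OpenSSL subject hash (for example 3ec20f2e.0 and b5213941.0). A small sketch of how such a hash link is derived; the CERT path is one of the files shown above:
	# Sketch: link a CA cert into /etc/ssl/certs under its OpenSSL subject hash.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"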
	I0907 00:52:06.282506   46354 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:52:06.287280   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:52:06.293280   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:52:06.299272   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:52:06.305342   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:52:06.311194   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:52:06.317634   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
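The -checkend 86400 runs above ask openssl whether each certificate stays valid for at least another 86400 seconds (24 hours); a non-zero exit marks the certificate for regeneration. The same check, sketched over two of the paths from the log:
	# Exit 0 = still valid 24h from now; non-zero = expiring soon.
	for c in /var/lib/minikube/certs/apiserver-etcd-client.crt /var/lib/minikube/certs/etcd/server.crt; do
	  sudo openssl x509 -noout -in "$c" -checkend 86400 || echo "$c expires within 24h"
	done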
	I0907 00:52:06.323437   46354 kubeadm.go:404] StartCluster: {Name:old-k8s-version-940806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-940806 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:52:06.323591   46354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:52:06.323668   46354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:52:06.358285   46354 cri.go:89] found id: ""
	I0907 00:52:06.358357   46354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:52:06.368975   46354 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:52:06.368997   46354 kubeadm.go:636] restartCluster start
	I0907 00:52:06.369060   46354 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:52:06.379841   46354 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.380906   46354 kubeconfig.go:92] found "old-k8s-version-940806" server: "https://192.168.83.245:8443"
	I0907 00:52:06.383428   46354 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:52:06.393862   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.393912   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.406922   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.406947   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.406995   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.419930   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.920685   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.920763   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.934327   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:07.420551   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:07.420652   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:07.438377   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:07.920500   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:07.920598   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:07.936835   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:05.363807   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:07.869141   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:05.229666   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:07.729895   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:09.731464   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:06.656552   47297 node_ready.go:58] node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:09.155326   47297 node_ready.go:49] node "default-k8s-diff-port-773466" has status "Ready":"True"
	I0907 00:52:09.155347   47297 node_ready.go:38] duration metric: took 7.013040488s waiting for node "default-k8s-diff-port-773466" to be "Ready" ...
	I0907 00:52:09.155355   47297 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:09.164225   47297 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.170406   47297 pod_ready.go:92] pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.170437   47297 pod_ready.go:81] duration metric: took 6.189088ms waiting for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.170450   47297 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.178363   47297 pod_ready.go:92] pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.178390   47297 pod_ready.go:81] duration metric: took 7.932283ms waiting for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.178403   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.184875   47297 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.184891   47297 pod_ready.go:81] duration metric: took 6.482032ms waiting for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.184900   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.192246   47297 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.192265   47297 pod_ready.go:81] duration metric: took 7.359919ms waiting for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.192274   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.556032   47297 pod_ready.go:92] pod "kube-proxy-5bh7n" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.556064   47297 pod_ready.go:81] duration metric: took 363.783194ms waiting for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.556077   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:08.420749   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:08.420813   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:08.434111   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:08.920795   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:08.920891   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:08.934515   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:09.420076   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:09.420167   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:09.433668   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:09.920090   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:09.920185   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:09.934602   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.420086   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:10.420186   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:10.434617   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.920124   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:10.920196   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:10.933372   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:11.420990   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:11.421072   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:11.435087   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:11.920579   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:11.920653   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:11.933614   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:12.420100   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:12.420192   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:12.434919   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:12.920816   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:12.920911   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:12.934364   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.357508   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.357966   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:14.358965   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.227826   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:14.228106   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:11.862581   47297 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.363573   47297 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:12.363593   47297 pod_ready.go:81] duration metric: took 2.807509276s waiting for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:12.363602   47297 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:14.763624   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:13.420355   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:13.420427   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:13.434047   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:13.920675   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:13.920757   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:13.933725   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:14.420169   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:14.420244   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:14.433012   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:14.920490   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:14.920603   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:14.934208   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:15.420724   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:15.420807   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:15.433542   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:15.920040   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:15.920114   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:15.933104   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:16.394845   46354 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:52:16.394878   46354 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:52:16.394891   46354 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:52:16.394939   46354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:52:16.430965   46354 cri.go:89] found id: ""
	I0907 00:52:16.431029   46354 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:52:16.449241   46354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:52:16.459891   46354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:52:16.459973   46354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:52:16.470006   46354 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:52:16.470033   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:16.591111   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.262647   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.481491   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.601432   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.722907   46354 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:52:17.723000   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:17.735327   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:16.360886   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.860619   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:16.230019   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.230274   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:17.262772   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:19.264986   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.254002   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:18.753686   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:19.253956   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:19.290590   46354 api_server.go:72] duration metric: took 1.567681708s to wait for apiserver process to appear ...
	I0907 00:52:19.290614   46354 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:52:19.290632   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:19.291177   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": dial tcp 192.168.83.245:8443: connect: connection refused
	I0907 00:52:19.291217   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:19.291691   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": dial tcp 192.168.83.245:8443: connect: connection refused
	I0907 00:52:19.792323   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:21.357716   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:23.358355   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:20.728569   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:22.730042   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:21.763571   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:24.264990   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:24.793514   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0907 00:52:24.793568   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:24.939397   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:52:24.939429   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:52:25.292624   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:25.350968   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0907 00:52:25.351004   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0907 00:52:25.792573   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:25.799666   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0907 00:52:25.799697   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0907 00:52:26.292258   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:26.301200   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 200:
	ok
	I0907 00:52:26.313982   46354 api_server.go:141] control plane version: v1.16.0
	I0907 00:52:26.314007   46354 api_server.go:131] duration metric: took 7.023387143s to wait for apiserver health ...
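The health wait above polls /healthz until it returns 200 "ok"; the earlier 403 (anonymous access not yet authorized) and 500 (post-start hooks still settling) responses are expected while kubeadm finishes bringing up the control plane. The same probe can be reproduced by hand, sketched below; -k skips TLS verification because the host does not trust the cluster CA:
	# Poll the apiserver health endpoint until it reports ok.
	until curl -ks https://192.168.83.245:8443/healthz | grep -qx ok; do sleep 1; done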
	I0907 00:52:26.314016   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:52:26.314021   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:26.316011   46354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:52:26.317496   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:52:26.335726   46354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:52:26.373988   46354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:52:26.393836   46354 system_pods.go:59] 7 kube-system pods found
	I0907 00:52:26.393861   46354 system_pods.go:61] "coredns-5644d7b6d9-56l68" [ab956d84-2998-42a4-b9ed-b71bc43c9730] Running
	I0907 00:52:26.393866   46354 system_pods.go:61] "etcd-old-k8s-version-940806" [6234bc4e-66d0-4fb6-8631-b45ee56b774c] Running
	I0907 00:52:26.393870   46354 system_pods.go:61] "kube-apiserver-old-k8s-version-940806" [303d2368-1964-4bdb-9d46-91602d6c52b4] Running
	I0907 00:52:26.393875   46354 system_pods.go:61] "kube-controller-manager-old-k8s-version-940806" [7a193f1e-8650-453b-bfa5-d4af3a8bfbc3] Running
	I0907 00:52:26.393878   46354 system_pods.go:61] "kube-proxy-2d8pb" [1689f3e9-0487-422e-a450-9c96595cea00] Running
	I0907 00:52:26.393882   46354 system_pods.go:61] "kube-scheduler-old-k8s-version-940806" [cbd69cd2-3fc6-418b-aa4f-ef19b1b903e1] Running
	I0907 00:52:26.393886   46354 system_pods.go:61] "storage-provisioner" [f313e63f-6c39-4b81-86d1-8054fd6af338] Running
	I0907 00:52:26.393891   46354 system_pods.go:74] duration metric: took 19.879283ms to wait for pod list to return data ...
	I0907 00:52:26.393900   46354 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:52:26.401474   46354 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:52:26.401502   46354 node_conditions.go:123] node cpu capacity is 2
	I0907 00:52:26.401512   46354 node_conditions.go:105] duration metric: took 7.606706ms to run NodePressure ...
	I0907 00:52:26.401529   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:26.811645   46354 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:52:26.817493   46354 retry.go:31] will retry after 177.884133ms: kubelet not initialised
	I0907 00:52:26.999917   46354 retry.go:31] will retry after 499.371742ms: kubelet not initialised
	I0907 00:52:27.504386   46354 retry.go:31] will retry after 692.030349ms: kubelet not initialised
	I0907 00:52:28.201498   46354 retry.go:31] will retry after 627.806419ms: kubelet not initialised
	I0907 00:52:25.358575   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:27.860612   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:25.229134   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:27.230538   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:29.729637   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:26.764040   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:29.264855   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:28.841483   46354 retry.go:31] will retry after 1.816521725s: kubelet not initialised
	I0907 00:52:30.664615   46354 retry.go:31] will retry after 1.888537042s: kubelet not initialised
	I0907 00:52:32.559591   46354 retry.go:31] will retry after 1.787314239s: kubelet not initialised
	I0907 00:52:30.358330   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:32.857719   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:32.229103   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:34.229797   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:31.265047   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:33.763354   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:34.353206   46354 retry.go:31] will retry after 5.20863166s: kubelet not initialised
	I0907 00:52:34.860752   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:37.358005   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:36.229978   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:38.728934   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:36.264389   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:38.762232   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:39.567124   46354 retry.go:31] will retry after 8.04288108s: kubelet not initialised
	I0907 00:52:39.863004   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:42.359394   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:40.729770   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:43.236530   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:40.762994   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:43.263094   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:45.264328   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.616011   46354 retry.go:31] will retry after 4.959306281s: kubelet not initialised
	I0907 00:52:44.858665   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.359722   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:45.729067   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:48.228533   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.763985   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:50.263571   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.580975   46354 retry.go:31] will retry after 19.653399141s: kubelet not initialised
	I0907 00:52:49.858583   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.360050   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.361428   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:50.229168   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.229310   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.229581   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.263685   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.762390   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.857835   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.357322   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.728575   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.228623   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.762553   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.263070   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.357560   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.358151   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.228910   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.728870   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.264341   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.764046   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:05.858279   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:07.861484   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:05.729314   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:08.229765   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:06.263532   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:08.763318   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:12.241966   46354 kubeadm.go:787] kubelet initialised
	I0907 00:53:12.242006   46354 kubeadm.go:788] duration metric: took 45.430332167s waiting for restarted kubelet to initialise ...
	I0907 00:53:12.242016   46354 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:53:12.247545   46354 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.253242   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.253264   46354 pod_ready.go:81] duration metric: took 5.697075ms waiting for pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.253276   46354 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.258467   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.258489   46354 pod_ready.go:81] duration metric: took 5.206456ms waiting for pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.258497   46354 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.264371   46354 pod_ready.go:92] pod "etcd-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.264394   46354 pod_ready.go:81] duration metric: took 5.89143ms waiting for pod "etcd-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.264406   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.269447   46354 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.269467   46354 pod_ready.go:81] duration metric: took 5.053466ms waiting for pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.269481   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.638374   46354 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.638400   46354 pod_ready.go:81] duration metric: took 368.911592ms waiting for pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.638413   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2d8pb" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.039158   46354 pod_ready.go:92] pod "kube-proxy-2d8pb" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:13.039183   46354 pod_ready.go:81] duration metric: took 400.763103ms waiting for pod "kube-proxy-2d8pb" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.039191   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:10.359605   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:12.361679   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:10.729293   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.229130   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:11.263595   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.264729   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:15.268640   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.439450   46354 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:13.439477   46354 pod_ready.go:81] duration metric: took 400.279988ms waiting for pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.439486   46354 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:15.746303   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:17.747193   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:14.858056   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:16.860373   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:19.361777   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:15.730623   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:18.229790   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:17.763744   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.262360   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.246964   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:22.746507   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:21.361826   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:23.857891   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.729313   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:23.228479   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:22.263551   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:24.762509   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.246087   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:27.745946   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.858658   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:28.361105   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.732342   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:28.229971   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:26.763684   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:29.262971   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:29.746043   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:31.746133   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:30.857617   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:32.860863   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:30.728633   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:32.730094   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:31.264742   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:33.764483   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:33.748648   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:36.246158   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:35.358908   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:37.361998   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:35.229141   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:37.729367   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:36.263505   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:38.264633   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:38.746190   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.751934   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:39.858993   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:41.860052   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:44.359421   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.228491   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:42.229143   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:44.229996   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.766539   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:43.264325   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:43.245475   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:45.245574   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:47.246524   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:46.857876   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:48.859569   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:46.230037   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:48.727940   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:45.763110   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:47.763211   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.264727   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:49.745339   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:51.746054   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.859934   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:53.357432   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.729449   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:52.729731   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.731191   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:52.763145   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.763847   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.246469   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:56.746034   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:55.357937   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:57.856743   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:57.227742   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:59.228654   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:56.764030   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:58.765416   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:58.746909   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.246396   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:59.858583   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:02.357694   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:04.357907   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.229565   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.729229   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.263126   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.764100   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.745703   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:05.745994   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.858308   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:09.357561   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.229604   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.727738   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.262721   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.263088   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.264022   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.246673   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.246999   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.746105   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:11.358384   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:13.358491   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.729593   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.732429   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.762306   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.263152   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:14.746491   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.245728   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.361153   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.860338   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.229785   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.730926   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:19.733515   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.763593   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:20.264199   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:19.247271   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:21.251269   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:20.360652   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.860291   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.229545   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:24.729109   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.264956   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:24.764699   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:23.746737   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:25.747269   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:25.357166   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:27.358248   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:26.729136   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.226834   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:27.262945   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.763714   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:28.245784   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:30.245932   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.745051   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.860752   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.357600   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.361871   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:31.227731   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:33.727721   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.262586   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.263485   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.745803   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.745877   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.858000   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.859206   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:35.729469   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.227947   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.763348   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.763533   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:39.245567   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:41.246549   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:40.859969   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:42.862293   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:40.228842   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:42.230064   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:44.732421   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:41.263587   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:43.762536   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:43.746104   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:46.247106   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:45.358648   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:47.858022   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:47.229847   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:49.729764   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:45.763352   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:48.263554   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:48.745911   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.746370   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.357129   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.357416   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:54.359626   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.228487   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:54.728565   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.762919   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.764740   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:55.262939   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:53.248337   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:55.746300   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:56.858127   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.358102   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:56.730045   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.227094   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:57.263059   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.263696   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:58.247342   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:00.745494   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:02.748481   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.360153   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.360737   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.227937   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.235852   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.263956   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.763406   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.246551   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:07.747587   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.858981   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:07.861146   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.729711   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:08.228310   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.764163   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:08.263381   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.263936   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.247504   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.745798   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.360810   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.859446   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.229240   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.728782   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:14.729856   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.763565   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:15.263530   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:14.746534   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.246569   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:15.356953   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.358790   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:16.732983   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.228136   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.264573   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.763137   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.745008   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.745932   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.858109   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:22.358258   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.228589   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.729147   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.763406   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.763580   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.746337   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.748262   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:24.860943   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:27.357823   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.729423   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:27.731209   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.764235   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:28.263390   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:28.254786   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.746056   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:29.859827   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:31.861387   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:33.862627   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.227830   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:32.227911   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:34.728680   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.762895   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:32.763333   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:35.262940   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:33.247352   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:35.247638   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.747011   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:36.356562   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:38.358379   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.227942   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:39.230445   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.264134   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:39.763848   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:40.245726   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.246951   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:40.858763   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.859176   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:41.729215   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.228235   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.263784   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.762310   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.747834   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:46.748669   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:45.361972   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:47.861601   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:45.453504   46768 pod_ready.go:81] duration metric: took 4m0.000384981s waiting for pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace to be "Ready" ...
	E0907 00:55:45.453536   46768 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:55:45.453557   46768 pod_ready.go:38] duration metric: took 4m14.103603262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:55:45.453586   46768 kubeadm.go:640] restartCluster took 4m33.861797616s
	W0907 00:55:45.453681   46768 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0907 00:55:45.453721   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0907 00:55:46.762627   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:48.764174   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:49.247771   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:51.747171   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:50.361591   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:52.362641   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:53.550366   46833 pod_ready.go:81] duration metric: took 4m0.000125687s waiting for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	E0907 00:55:53.550409   46833 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:55:53.550421   46833 pod_ready.go:38] duration metric: took 4m5.601345022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:55:53.550444   46833 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:55:53.550477   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:55:53.550553   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:55:53.601802   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:53.601823   46833 cri.go:89] found id: ""
	I0907 00:55:53.601831   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:55:53.601892   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.606465   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:55:53.606555   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:55:53.643479   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:53.643509   46833 cri.go:89] found id: ""
	I0907 00:55:53.643516   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:55:53.643562   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.648049   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:55:53.648101   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:55:53.679620   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:53.679648   46833 cri.go:89] found id: ""
	I0907 00:55:53.679658   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:55:53.679706   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.684665   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:55:53.684721   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:55:53.725282   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:53.725302   46833 cri.go:89] found id: ""
	I0907 00:55:53.725309   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:55:53.725364   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.729555   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:55:53.729627   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:55:53.761846   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:53.761875   46833 cri.go:89] found id: ""
	I0907 00:55:53.761883   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:55:53.761930   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.766451   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:55:53.766523   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:55:53.800099   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:53.800118   46833 cri.go:89] found id: ""
	I0907 00:55:53.800124   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:55:53.800168   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.804614   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:55:53.804676   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:55:53.841198   46833 cri.go:89] found id: ""
	I0907 00:55:53.841219   46833 logs.go:284] 0 containers: []
	W0907 00:55:53.841225   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:55:53.841230   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:55:53.841288   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:55:53.883044   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:53.883071   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:53.883077   46833 cri.go:89] found id: ""
	I0907 00:55:53.883085   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:55:53.883133   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.887172   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.891540   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:55:53.891566   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:53.944734   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:55:53.944765   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:53.979803   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:55:53.979832   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:54.015131   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:55:54.015159   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:54.062445   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:55:54.062478   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:54.097313   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:55:54.097343   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:55:54.685400   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:55:54.685442   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:55:51.262853   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:53.764766   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:54.248875   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:56.746538   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:54.836523   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:55:54.836555   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:54.885972   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:55:54.886002   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:54.918966   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:55:54.919000   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:54.951966   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:55:54.951996   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:55:54.991382   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:55:54.991418   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:55:55.048526   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:55:55.048561   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:55:57.564574   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:55:57.579844   46833 api_server.go:72] duration metric: took 4m15.68090954s to wait for apiserver process to appear ...
	I0907 00:55:57.579867   46833 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:55:57.579899   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:55:57.579963   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:55:57.619205   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:57.619225   46833 cri.go:89] found id: ""
	I0907 00:55:57.619235   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:55:57.619287   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.623884   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:55:57.623962   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:55:57.653873   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:57.653899   46833 cri.go:89] found id: ""
	I0907 00:55:57.653907   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:55:57.653967   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.658155   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:55:57.658219   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:55:57.688169   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:57.688195   46833 cri.go:89] found id: ""
	I0907 00:55:57.688203   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:55:57.688256   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.692208   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:55:57.692274   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:55:57.722477   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:57.722498   46833 cri.go:89] found id: ""
	I0907 00:55:57.722505   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:55:57.722548   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.726875   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:55:57.726926   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:55:57.768681   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:57.768709   46833 cri.go:89] found id: ""
	I0907 00:55:57.768718   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:55:57.768768   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.773562   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:55:57.773654   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:55:57.806133   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:57.806158   46833 cri.go:89] found id: ""
	I0907 00:55:57.806166   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:55:57.806222   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.810401   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:55:57.810446   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:55:57.840346   46833 cri.go:89] found id: ""
	I0907 00:55:57.840371   46833 logs.go:284] 0 containers: []
	W0907 00:55:57.840379   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:55:57.840384   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:55:57.840435   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:55:57.869978   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:57.869998   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:57.870002   46833 cri.go:89] found id: ""
	I0907 00:55:57.870008   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:55:57.870052   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.874945   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.878942   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:55:57.878964   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:55:58.015009   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:55:58.015035   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:58.063331   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:55:58.063365   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:58.098316   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:55:58.098343   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:58.140312   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:55:58.140342   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:58.170471   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:55:58.170499   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:55:58.217775   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:55:58.217804   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:55:58.275681   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:55:58.275717   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:58.323629   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:55:58.323663   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:58.360608   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:55:58.360636   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:58.397158   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:55:58.397193   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:58.435395   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:55:58.435425   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:55:59.023632   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:55:59.023687   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:55:55.767692   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:58.262808   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:00.263787   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:59.246042   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:01.746441   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:01.540667   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:56:01.548176   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 200:
	ok
	I0907 00:56:01.549418   46833 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:01.549443   46833 api_server.go:131] duration metric: took 3.969568684s to wait for apiserver health ...
	I0907 00:56:01.549451   46833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:01.549474   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:01.549546   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:01.579945   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:56:01.579975   46833 cri.go:89] found id: ""
	I0907 00:56:01.579985   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:56:01.580038   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.584609   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:01.584673   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:01.628626   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:56:01.628647   46833 cri.go:89] found id: ""
	I0907 00:56:01.628656   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:56:01.628711   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.633293   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:01.633362   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:01.663898   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:56:01.663923   46833 cri.go:89] found id: ""
	I0907 00:56:01.663932   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:56:01.663994   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.668130   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:01.668198   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:01.699021   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:56:01.699045   46833 cri.go:89] found id: ""
	I0907 00:56:01.699055   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:56:01.699107   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.703470   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:01.703536   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:01.740360   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:56:01.740387   46833 cri.go:89] found id: ""
	I0907 00:56:01.740396   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:56:01.740450   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.747366   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:01.747445   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:01.783175   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:56:01.783218   46833 cri.go:89] found id: ""
	I0907 00:56:01.783226   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:56:01.783267   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.787565   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:01.787628   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:01.822700   46833 cri.go:89] found id: ""
	I0907 00:56:01.822730   46833 logs.go:284] 0 containers: []
	W0907 00:56:01.822740   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:01.822747   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:01.822818   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:01.853909   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:56:01.853934   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:56:01.853938   46833 cri.go:89] found id: ""
	I0907 00:56:01.853945   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:56:01.853990   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.858209   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.862034   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:56:01.862053   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:56:01.902881   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:56:01.902915   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:56:01.937846   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:56:01.937882   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:56:01.993495   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:56:01.993526   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:56:02.029773   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:56:02.029810   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:02.076180   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:02.076210   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:02.133234   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:02.133268   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:02.278183   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:56:02.278209   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:56:02.325096   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:56:02.325125   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:56:02.362517   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:56:02.362542   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:56:02.393393   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:02.393430   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:02.950480   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:02.950521   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:02.967628   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:56:02.967658   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:56:05.533216   46833 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:05.533249   46833 system_pods.go:61] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running
	I0907 00:56:05.533257   46833 system_pods.go:61] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running
	I0907 00:56:05.533264   46833 system_pods.go:61] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running
	I0907 00:56:05.533271   46833 system_pods.go:61] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running
	I0907 00:56:05.533276   46833 system_pods.go:61] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running
	I0907 00:56:05.533283   46833 system_pods.go:61] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running
	I0907 00:56:05.533292   46833 system_pods.go:61] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:05.533305   46833 system_pods.go:61] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running
	I0907 00:56:05.533315   46833 system_pods.go:74] duration metric: took 3.983859289s to wait for pod list to return data ...
	I0907 00:56:05.533327   46833 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:05.536806   46833 default_sa.go:45] found service account: "default"
	I0907 00:56:05.536833   46833 default_sa.go:55] duration metric: took 3.496147ms for default service account to be created ...
	I0907 00:56:05.536842   46833 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:05.543284   46833 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:05.543310   46833 system_pods.go:89] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running
	I0907 00:56:05.543318   46833 system_pods.go:89] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running
	I0907 00:56:05.543325   46833 system_pods.go:89] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running
	I0907 00:56:05.543332   46833 system_pods.go:89] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running
	I0907 00:56:05.543337   46833 system_pods.go:89] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running
	I0907 00:56:05.543344   46833 system_pods.go:89] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running
	I0907 00:56:05.543355   46833 system_pods.go:89] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:05.543367   46833 system_pods.go:89] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running
	I0907 00:56:05.543377   46833 system_pods.go:126] duration metric: took 6.528914ms to wait for k8s-apps to be running ...
	I0907 00:56:05.543391   46833 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:05.543437   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:05.559581   46833 system_svc.go:56] duration metric: took 16.174514ms WaitForService to wait for kubelet.
	I0907 00:56:05.559613   46833 kubeadm.go:581] duration metric: took 4m23.660681176s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:05.559638   46833 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:05.564521   46833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:05.564552   46833 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:05.564566   46833 node_conditions.go:105] duration metric: took 4.922449ms to run NodePressure ...
	I0907 00:56:05.564579   46833 start.go:228] waiting for startup goroutines ...
	I0907 00:56:05.564589   46833 start.go:233] waiting for cluster config update ...
	I0907 00:56:05.564609   46833 start.go:242] writing updated cluster config ...
	I0907 00:56:05.564968   46833 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:05.618906   46833 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:05.620461   46833 out.go:177] * Done! kubectl is now configured to use "embed-certs-546209" cluster and "default" namespace by default
	I0907 00:56:02.763702   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:05.264729   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:04.246390   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:06.246925   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:07.762598   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:09.764581   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:08.746379   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:11.246764   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:12.263747   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:12.364712   47297 pod_ready.go:81] duration metric: took 4m0.00109115s waiting for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	E0907 00:56:12.364763   47297 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:56:12.364776   47297 pod_ready.go:38] duration metric: took 4m3.209409487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:12.364799   47297 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:56:12.364833   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:12.364891   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:12.416735   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:12.416760   47297 cri.go:89] found id: ""
	I0907 00:56:12.416767   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:12.416818   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.423778   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:12.423849   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:12.465058   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:12.465086   47297 cri.go:89] found id: ""
	I0907 00:56:12.465095   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:12.465152   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.471730   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:12.471793   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:12.508984   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:12.509005   47297 cri.go:89] found id: ""
	I0907 00:56:12.509017   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:12.509073   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.513689   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:12.513745   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:12.550233   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:12.550257   47297 cri.go:89] found id: ""
	I0907 00:56:12.550266   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:12.550325   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.556588   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:12.556665   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:12.598826   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:12.598853   47297 cri.go:89] found id: ""
	I0907 00:56:12.598862   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:12.598913   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.603710   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:12.603778   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:12.645139   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:12.645169   47297 cri.go:89] found id: ""
	I0907 00:56:12.645179   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:12.645236   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.650685   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:12.650755   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:12.686256   47297 cri.go:89] found id: ""
	I0907 00:56:12.686284   47297 logs.go:284] 0 containers: []
	W0907 00:56:12.686291   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:12.686297   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:12.686349   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:12.719614   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:12.719638   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:12.719645   47297 cri.go:89] found id: ""
	I0907 00:56:12.719655   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:12.719713   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.724842   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.728880   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:12.728899   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:12.771051   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:12.771081   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:12.812110   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:12.812140   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:12.847819   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:12.847845   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:13.436674   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:13.436711   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:13.454385   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:13.454425   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:13.617809   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:13.617838   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:13.652209   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:13.652239   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:13.683939   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:13.683977   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:13.730116   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:13.730151   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:13.763253   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:13.763278   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:13.804890   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:13.804918   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:13.861822   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:13.861856   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:17.242461   46768 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.788701806s)
	I0907 00:56:17.242546   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:17.259241   46768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:56:17.268943   46768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:56:17.278094   46768 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:56:17.278138   46768 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0907 00:56:17.342868   46768 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0907 00:56:17.342981   46768 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 00:56:17.519943   46768 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:56:17.520089   46768 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:56:17.520214   46768 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 00:56:17.714902   46768 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:56:13.247487   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:15.746162   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:17.748049   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:17.716739   46768 out.go:204]   - Generating certificates and keys ...
	I0907 00:56:17.716894   46768 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 00:56:17.717007   46768 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 00:56:17.717113   46768 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0907 00:56:17.717361   46768 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0907 00:56:17.717892   46768 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0907 00:56:17.718821   46768 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0907 00:56:17.719502   46768 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0907 00:56:17.719996   46768 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0907 00:56:17.720644   46768 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0907 00:56:17.721254   46768 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0907 00:56:17.721832   46768 kubeadm.go:322] [certs] Using the existing "sa" key
	I0907 00:56:17.721911   46768 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:56:17.959453   46768 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:56:18.029012   46768 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:56:18.146402   46768 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:56:18.309148   46768 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:56:18.309726   46768 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:56:18.312628   46768 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:56:18.315593   46768 out.go:204]   - Booting up control plane ...
	I0907 00:56:18.315744   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:56:18.315870   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:56:18.317157   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:56:18.336536   46768 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:56:18.336947   46768 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:56:18.337042   46768 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0907 00:56:18.472759   46768 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 00:56:16.415279   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:56:16.431021   47297 api_server.go:72] duration metric: took 4m14.6757965s to wait for apiserver process to appear ...
	I0907 00:56:16.431047   47297 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:56:16.431086   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:16.431144   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:16.474048   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:16.474075   47297 cri.go:89] found id: ""
	I0907 00:56:16.474085   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:16.474141   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.478873   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:16.478956   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:16.512799   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:16.512817   47297 cri.go:89] found id: ""
	I0907 00:56:16.512824   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:16.512880   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.518717   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:16.518812   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:16.553996   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:16.554016   47297 cri.go:89] found id: ""
	I0907 00:56:16.554023   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:16.554066   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.559358   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:16.559422   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:16.598717   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:16.598739   47297 cri.go:89] found id: ""
	I0907 00:56:16.598746   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:16.598821   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.603704   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:16.603766   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:16.646900   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:16.646928   47297 cri.go:89] found id: ""
	I0907 00:56:16.646937   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:16.646995   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.651216   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:16.651287   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:16.681334   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:16.681361   47297 cri.go:89] found id: ""
	I0907 00:56:16.681374   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:16.681429   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.685963   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:16.686028   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:16.720214   47297 cri.go:89] found id: ""
	I0907 00:56:16.720243   47297 logs.go:284] 0 containers: []
	W0907 00:56:16.720253   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:16.720259   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:16.720316   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:16.756411   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:16.756437   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:16.756444   47297 cri.go:89] found id: ""
	I0907 00:56:16.756452   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:16.756512   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.762211   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.767635   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:16.767659   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:16.784092   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:16.784122   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:16.936817   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:16.936845   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:16.979426   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:16.979455   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:17.009878   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:17.009912   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:17.048086   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:17.048113   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:17.103114   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:17.103156   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:17.139125   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:17.139163   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:17.181560   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:17.181588   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:17.224815   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:17.224841   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:17.299438   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:17.299474   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:17.355165   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:17.355197   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:17.403781   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:17.403809   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:20.491060   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:56:20.498573   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 200:
	ok
	I0907 00:56:20.501753   47297 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:20.501774   47297 api_server.go:131] duration metric: took 4.070720466s to wait for apiserver health ...
	I0907 00:56:20.501782   47297 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:20.501807   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:20.501856   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:20.545524   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:20.545550   47297 cri.go:89] found id: ""
	I0907 00:56:20.545560   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:20.545616   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.552051   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:20.552120   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:20.593019   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:20.593041   47297 cri.go:89] found id: ""
	I0907 00:56:20.593049   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:20.593104   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.598430   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:20.598500   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:20.639380   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:20.639407   47297 cri.go:89] found id: ""
	I0907 00:56:20.639417   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:20.639507   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.645270   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:20.645342   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:20.247030   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:22.247132   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:20.684338   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:20.684368   47297 cri.go:89] found id: ""
	I0907 00:56:20.684378   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:20.684438   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.689465   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:20.689528   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:20.727854   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:20.727879   47297 cri.go:89] found id: ""
	I0907 00:56:20.727887   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:20.727938   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.733320   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:20.733389   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:20.776584   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:20.776607   47297 cri.go:89] found id: ""
	I0907 00:56:20.776614   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:20.776659   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.781745   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:20.781822   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:20.817720   47297 cri.go:89] found id: ""
	I0907 00:56:20.817746   47297 logs.go:284] 0 containers: []
	W0907 00:56:20.817756   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:20.817763   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:20.817819   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:20.857693   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:20.857716   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:20.857723   47297 cri.go:89] found id: ""
	I0907 00:56:20.857732   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:20.857788   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.862242   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.866469   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:20.866489   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:20.907476   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:20.907514   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:20.946383   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:20.946418   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:20.983830   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:20.983858   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:21.572473   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:21.572524   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:21.626465   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:21.626496   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:21.692455   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:21.692491   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:21.712600   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:21.712632   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:21.855914   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:21.855948   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:21.909035   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:21.909068   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:21.961286   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:21.961317   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:22.002150   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:22.002177   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:22.035129   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:22.035156   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
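The block above is minikube's diagnostics sweep for the default-k8s-diff-port-773466 profile: list CRI containers, tail the last 400 lines of each control-plane container, and read the kubelet and CRI-O journals. The same data can be collected by hand over SSH; a condensed sketch using the node-side commands already visible in the Run: lines ("<container-id>" is a placeholder for an ID from "crictl ps -a", and "diag.txt" is an arbitrary file name):

    minikube -p default-k8s-diff-port-773466 ssh -- sudo crictl ps -a
    minikube -p default-k8s-diff-port-773466 ssh -- sudo crictl logs --tail 400 <container-id>
    minikube -p default-k8s-diff-port-773466 ssh -- sudo journalctl -u kubelet -n 400
    minikube -p default-k8s-diff-port-773466 ssh -- sudo journalctl -u crio -n 400
    # "minikube logs" should bundle roughly the same output in one file:
    minikube -p default-k8s-diff-port-773466 logs --file=diag.txt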
	I0907 00:56:24.592419   47297 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:24.592455   47297 system_pods.go:61] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running
	I0907 00:56:24.592460   47297 system_pods.go:61] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running
	I0907 00:56:24.592464   47297 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running
	I0907 00:56:24.592469   47297 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running
	I0907 00:56:24.592473   47297 system_pods.go:61] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running
	I0907 00:56:24.592477   47297 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running
	I0907 00:56:24.592483   47297 system_pods.go:61] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:24.592489   47297 system_pods.go:61] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running
	I0907 00:56:24.592494   47297 system_pods.go:74] duration metric: took 4.090707422s to wait for pod list to return data ...
	I0907 00:56:24.592501   47297 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:24.596106   47297 default_sa.go:45] found service account: "default"
	I0907 00:56:24.596127   47297 default_sa.go:55] duration metric: took 3.621408ms for default service account to be created ...
	I0907 00:56:24.596134   47297 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:24.601998   47297 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:24.602021   47297 system_pods.go:89] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running
	I0907 00:56:24.602026   47297 system_pods.go:89] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running
	I0907 00:56:24.602032   47297 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running
	I0907 00:56:24.602037   47297 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running
	I0907 00:56:24.602041   47297 system_pods.go:89] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running
	I0907 00:56:24.602046   47297 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running
	I0907 00:56:24.602054   47297 system_pods.go:89] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:24.602063   47297 system_pods.go:89] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running
	I0907 00:56:24.602069   47297 system_pods.go:126] duration metric: took 5.931212ms to wait for k8s-apps to be running ...
	I0907 00:56:24.602076   47297 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:24.602116   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:24.623704   47297 system_svc.go:56] duration metric: took 21.617229ms WaitForService to wait for kubelet.
	I0907 00:56:24.623734   47297 kubeadm.go:581] duration metric: took 4m22.868513281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:24.623754   47297 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:24.628408   47297 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:24.628435   47297 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:24.628444   47297 node_conditions.go:105] duration metric: took 4.686272ms to run NodePressure ...
	I0907 00:56:24.628454   47297 start.go:228] waiting for startup goroutines ...
	I0907 00:56:24.628460   47297 start.go:233] waiting for cluster config update ...
	I0907 00:56:24.628469   47297 start.go:242] writing updated cluster config ...
	I0907 00:56:24.628735   47297 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:24.683237   47297 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:24.686336   47297 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-773466" cluster and "default" namespace by default
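The final checks logged just before "Done!" (node ephemeral-storage and CPU capacity, then the kubectl client/cluster version skew) can be reproduced from the host with kubectl; a small sketch, assuming the kubeconfig minikube just wrote is in use:

    kubectl --context default-k8s-diff-port-773466 get node default-k8s-diff-port-773466 \
      -o jsonpath='{.status.capacity}'
    kubectl --context default-k8s-diff-port-773466 version   # compare client vs. server minor versions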
	I0907 00:56:26.977381   46768 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503998 seconds
	I0907 00:56:26.977624   46768 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:56:27.000116   46768 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:56:27.541598   46768 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:56:27.541809   46768 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-321164 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0907 00:56:28.055045   46768 kubeadm.go:322] [bootstrap-token] Using token: 7x1950.9u417zcplp1q0xai
	I0907 00:56:24.247241   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:26.773163   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:28.056582   46768 out.go:204]   - Configuring RBAC rules ...
	I0907 00:56:28.056725   46768 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:56:28.065256   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0907 00:56:28.075804   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:56:28.081996   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 00:56:28.090825   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:56:28.097257   46768 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:56:28.114787   46768 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0907 00:56:28.337001   46768 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 00:56:28.476411   46768 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 00:56:28.479682   46768 kubeadm.go:322] 
	I0907 00:56:28.479784   46768 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 00:56:28.479799   46768 kubeadm.go:322] 
	I0907 00:56:28.479898   46768 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 00:56:28.479912   46768 kubeadm.go:322] 
	I0907 00:56:28.479943   46768 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 00:56:28.480046   46768 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:56:28.480143   46768 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:56:28.480163   46768 kubeadm.go:322] 
	I0907 00:56:28.480343   46768 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0907 00:56:28.480361   46768 kubeadm.go:322] 
	I0907 00:56:28.480431   46768 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0907 00:56:28.480450   46768 kubeadm.go:322] 
	I0907 00:56:28.480544   46768 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 00:56:28.480656   46768 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:56:28.480783   46768 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:56:28.480796   46768 kubeadm.go:322] 
	I0907 00:56:28.480924   46768 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0907 00:56:28.481024   46768 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 00:56:28.481034   46768 kubeadm.go:322] 
	I0907 00:56:28.481117   46768 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7x1950.9u417zcplp1q0xai \
	I0907 00:56:28.481203   46768 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 00:56:28.481223   46768 kubeadm.go:322] 	--control-plane 
	I0907 00:56:28.481226   46768 kubeadm.go:322] 
	I0907 00:56:28.481346   46768 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:56:28.481355   46768 kubeadm.go:322] 
	I0907 00:56:28.481453   46768 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7x1950.9u417zcplp1q0xai \
	I0907 00:56:28.481572   46768 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:56:28.482216   46768 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
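The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's public key. The bootstrap token expires, but the hash does not, and it can be recomputed on the control-plane node; a sketch, assuming minikube's certificate directory /var/lib/minikube/certs (the path kubeadm reports elsewhere in these logs):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # alternatively, on a node with kubeadm on PATH, print a fresh token plus the matching hash:
    kubeadm token create --print-join-command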
	I0907 00:56:28.482238   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:56:28.482248   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:56:28.484094   46768 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:56:28.485597   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:56:28.537400   46768 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
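The scp above drops the bridge CNI config onto the node as /etc/cni/net.d/1-k8s.conflist (457 bytes; the contents are not shown in the log). A quick way to inspect what landed there, sketched with the no-preload profile from this run:

    minikube -p no-preload-321164 ssh -- sudo ls /etc/cni/net.d/
    minikube -p no-preload-321164 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist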
	I0907 00:56:28.577654   46768 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:56:28.577734   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:28.577747   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=no-preload-321164 minikube.k8s.io/updated_at=2023_09_07T00_56_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:28.909178   46768 ops.go:34] apiserver oom_adj: -16
	I0907 00:56:28.920821   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.027812   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.627489   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:30.127554   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.246606   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:31.746291   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:30.627315   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:31.127759   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:31.627183   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:32.127488   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:32.627464   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.126850   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.626901   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:34.126917   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:34.626850   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:35.127788   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.747054   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:35.747536   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:35.627454   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:36.126916   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:36.626926   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:37.126845   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:37.627579   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:38.126885   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:38.627849   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:39.127371   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:39.627929   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.127775   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.627392   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.760535   46768 kubeadm.go:1081] duration metric: took 12.182860946s to wait for elevateKubeSystemPrivileges.
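The repeated "kubectl get sa default" calls above are a poll: after kubeadm init, the service-account controller creates the "default" ServiceAccount asynchronously, and minikube waits for it to appear before declaring the cluster usable. Functionally the poll is equivalent to a small wait loop; a sketch run from the host against this profile:

    until kubectl --context no-preload-321164 get serviceaccount default -n default >/dev/null 2>&1; do
      sleep 0.5
    done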
	I0907 00:56:40.760574   46768 kubeadm.go:406] StartCluster complete in 5m29.209699324s
	I0907 00:56:40.760594   46768 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:56:40.760690   46768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:56:40.762820   46768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:56:40.763132   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:56:40.763152   46768 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:56:40.763245   46768 addons.go:69] Setting storage-provisioner=true in profile "no-preload-321164"
	I0907 00:56:40.763251   46768 addons.go:69] Setting default-storageclass=true in profile "no-preload-321164"
	I0907 00:56:40.763263   46768 addons.go:231] Setting addon storage-provisioner=true in "no-preload-321164"
	W0907 00:56:40.763271   46768 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:56:40.763272   46768 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-321164"
	I0907 00:56:40.763314   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.763357   46768 config.go:182] Loaded profile config "no-preload-321164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:56:40.763404   46768 addons.go:69] Setting metrics-server=true in profile "no-preload-321164"
	I0907 00:56:40.763421   46768 addons.go:231] Setting addon metrics-server=true in "no-preload-321164"
	W0907 00:56:40.763428   46768 addons.go:240] addon metrics-server should already be in state true
	I0907 00:56:40.763464   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.763718   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763747   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.763772   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763793   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.763811   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763833   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.781727   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I0907 00:56:40.781738   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0907 00:56:40.781741   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0907 00:56:40.782188   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782279   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782332   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782702   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782724   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.782856   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782873   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.782879   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782894   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.783096   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783306   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783354   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783531   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.783686   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.783717   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.783905   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.783949   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.801244   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0907 00:56:40.801534   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36269
	I0907 00:56:40.801961   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.802064   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.802509   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.802529   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.802673   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.802689   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.802942   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.803153   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.803218   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.803365   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.804775   46768 addons.go:231] Setting addon default-storageclass=true in "no-preload-321164"
	W0907 00:56:40.804798   46768 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:56:40.804828   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.805191   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.805490   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.807809   46768 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:56:40.806890   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.809154   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.809188   46768 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:56:40.809199   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:56:40.809215   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.809249   46768 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:56:40.810543   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:56:40.810557   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:56:40.810570   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.809485   46768 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-321164" context rescaled to 1 replicas
	I0907 00:56:40.810637   46768 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:56:40.813528   46768 out.go:177] * Verifying Kubernetes components...
	I0907 00:56:38.246743   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:40.747015   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:40.814976   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:40.817948   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.818029   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.818080   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818100   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.818117   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818137   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818156   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.818175   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818212   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.818282   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.818348   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.818462   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.818472   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.818676   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.827224   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I0907 00:56:40.827578   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.828106   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.828122   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.828464   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.829012   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.829043   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.843423   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41287
	I0907 00:56:40.843768   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.844218   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.844236   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.844529   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.844735   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.846265   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.846489   46768 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:56:40.846506   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:56:40.846525   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.849325   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.849666   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.849704   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.849897   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.850103   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.850251   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.850397   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.965966   46768 node_ready.go:35] waiting up to 6m0s for node "no-preload-321164" to be "Ready" ...
	I0907 00:56:40.966030   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0907 00:56:40.997127   46768 node_ready.go:49] node "no-preload-321164" has status "Ready":"True"
	I0907 00:56:40.997149   46768 node_ready.go:38] duration metric: took 31.151467ms waiting for node "no-preload-321164" to be "Ready" ...
	I0907 00:56:40.997158   46768 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:41.010753   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:56:41.011536   46768 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:41.022410   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:56:41.022431   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:56:41.051599   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:56:41.119566   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:56:41.119594   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:56:41.228422   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:56:41.228443   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:56:41.321420   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:56:42.776406   46768 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.810334575s)
	I0907 00:56:42.776435   46768 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
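The long sed pipeline completed above injects a hosts block mapping host.minikube.internal to the gateway IP (192.168.61.1) into CoreDNS's Corefile and replaces the ConfigMap. To confirm the record took effect, read the Corefile back and resolve the name from inside the cluster; a sketch ("dns-test" is an arbitrary pod name and busybox:1.36 is just an example image that ships nslookup):

    kubectl --context no-preload-321164 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    kubectl --context no-preload-321164 run dns-test --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup host.minikube.internal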
	I0907 00:56:43.385184   46768 pod_ready.go:102] pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:43.446190   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.435398332s)
	I0907 00:56:43.446240   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.446248   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.3946112s)
	I0907 00:56:43.446255   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.449355   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.449362   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.449377   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.449389   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.449406   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.449732   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.449771   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.449787   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.450189   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.450216   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.450653   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.450672   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.450682   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.450691   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.451532   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.451597   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.451619   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.451635   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.451648   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.451869   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.451885   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.451895   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.689511   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.368045812s)
	I0907 00:56:43.689565   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.689579   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.689952   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.689963   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.689974   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.689991   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.690001   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.690291   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.690307   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.690309   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.690322   46768 addons.go:467] Verifying addon metrics-server=true in "no-preload-321164"
	I0907 00:56:43.693105   46768 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:56:43.694562   46768 addons.go:502] enable addons completed in 2.931409197s: enabled=[storage-provisioner default-storageclass metrics-server]
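At this point storage-provisioner, default-storageclass, and metrics-server have been applied; note that this test points the metrics-server image at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), which is why that pod keeps reporting Pending further down. Addon state for the profile can be listed, and an addon toggled, from the host; a sketch:

    minikube -p no-preload-321164 addons list
    minikube -p no-preload-321164 addons disable metrics-server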
	I0907 00:56:45.310723   46768 pod_ready.go:92] pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.310742   46768 pod_ready.go:81] duration metric: took 4.299181671s waiting for pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.310753   46768 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.316350   46768 pod_ready.go:92] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.316373   46768 pod_ready.go:81] duration metric: took 5.614264ms waiting for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.316385   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.321183   46768 pod_ready.go:92] pod "kube-apiserver-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.321205   46768 pod_ready.go:81] duration metric: took 4.811919ms waiting for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.321216   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.326279   46768 pod_ready.go:92] pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.326297   46768 pod_ready.go:81] duration metric: took 5.0741ms waiting for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.326308   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-st6n8" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.332665   46768 pod_ready.go:92] pod "kube-proxy-st6n8" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.332687   46768 pod_ready.go:81] duration metric: took 6.372253ms waiting for pod "kube-proxy-st6n8" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.332697   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.708023   46768 pod_ready.go:92] pod "kube-scheduler-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.708044   46768 pod_ready.go:81] duration metric: took 375.339873ms waiting for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.708051   46768 pod_ready.go:38] duration metric: took 4.710884592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:45.708065   46768 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:56:45.708106   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:56:45.725929   46768 api_server.go:72] duration metric: took 4.915250734s to wait for apiserver process to appear ...
	I0907 00:56:45.725950   46768 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:56:45.725964   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:56:45.731998   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 200:
	ok
	I0907 00:56:45.733492   46768 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:45.733507   46768 api_server.go:131] duration metric: took 7.552661ms to wait for apiserver health ...
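The health check above hits the API server directly at https://192.168.61.125:8443/healthz and expects a plain "ok", which is what the log shows. The same probe can be run from the host; a sketch that relies on the default system:public-info-viewer binding exposing /healthz, /livez and /readyz to unauthenticated clients (if that binding has been removed, client certificates are needed instead):

    curl -k https://192.168.61.125:8443/healthz ; echo
    curl -k 'https://192.168.61.125:8443/readyz?verbose' ; echo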
	I0907 00:56:45.733514   46768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:45.911337   46768 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:45.911374   46768 system_pods.go:61] "coredns-5dd5756b68-8tnp7" [1d896961-1b2c-48fd-b9dd-a40a95174fed] Running
	I0907 00:56:45.911383   46768 system_pods.go:61] "etcd-no-preload-321164" [84b8dd41-f676-48e0-b231-c27178cc0345] Running
	I0907 00:56:45.911389   46768 system_pods.go:61] "kube-apiserver-no-preload-321164" [a5a3cde8-128a-411d-9970-d3811ba22c5c] Running
	I0907 00:56:45.911397   46768 system_pods.go:61] "kube-controller-manager-no-preload-321164" [81614893-1ef1-4246-84ad-d4a2d9dedff8] Running
	I0907 00:56:45.911403   46768 system_pods.go:61] "kube-proxy-st6n8" [8f3aa3f2-223b-43de-b0e9-987958c50108] Running
	I0907 00:56:45.911410   46768 system_pods.go:61] "kube-scheduler-no-preload-321164" [7a45c187-7365-4144-ae68-ba42b1069afd] Running
	I0907 00:56:45.911421   46768 system_pods.go:61] "metrics-server-57f55c9bc5-vgngs" [9036423c-c4f7-4beb-92da-e106b8af306c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:45.911435   46768 system_pods.go:61] "storage-provisioner" [58bbe692-61d0-466d-b6bf-28af2faf4ec9] Running
	I0907 00:56:45.911443   46768 system_pods.go:74] duration metric: took 177.923008ms to wait for pod list to return data ...
	I0907 00:56:45.911455   46768 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:46.107121   46768 default_sa.go:45] found service account: "default"
	I0907 00:56:46.107149   46768 default_sa.go:55] duration metric: took 195.685496ms for default service account to be created ...
	I0907 00:56:46.107159   46768 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:46.314551   46768 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:46.314588   46768 system_pods.go:89] "coredns-5dd5756b68-8tnp7" [1d896961-1b2c-48fd-b9dd-a40a95174fed] Running
	I0907 00:56:46.314596   46768 system_pods.go:89] "etcd-no-preload-321164" [84b8dd41-f676-48e0-b231-c27178cc0345] Running
	I0907 00:56:46.314603   46768 system_pods.go:89] "kube-apiserver-no-preload-321164" [a5a3cde8-128a-411d-9970-d3811ba22c5c] Running
	I0907 00:56:46.314611   46768 system_pods.go:89] "kube-controller-manager-no-preload-321164" [81614893-1ef1-4246-84ad-d4a2d9dedff8] Running
	I0907 00:56:46.314618   46768 system_pods.go:89] "kube-proxy-st6n8" [8f3aa3f2-223b-43de-b0e9-987958c50108] Running
	I0907 00:56:46.314624   46768 system_pods.go:89] "kube-scheduler-no-preload-321164" [7a45c187-7365-4144-ae68-ba42b1069afd] Running
	I0907 00:56:46.314634   46768 system_pods.go:89] "metrics-server-57f55c9bc5-vgngs" [9036423c-c4f7-4beb-92da-e106b8af306c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:46.314645   46768 system_pods.go:89] "storage-provisioner" [58bbe692-61d0-466d-b6bf-28af2faf4ec9] Running
	I0907 00:56:46.314653   46768 system_pods.go:126] duration metric: took 207.48874ms to wait for k8s-apps to be running ...
	I0907 00:56:46.314663   46768 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:46.314713   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:46.331286   46768 system_svc.go:56] duration metric: took 16.613382ms WaitForService to wait for kubelet.
	I0907 00:56:46.331316   46768 kubeadm.go:581] duration metric: took 5.520640777s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:46.331342   46768 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:46.507374   46768 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:46.507398   46768 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:46.507406   46768 node_conditions.go:105] duration metric: took 176.059527ms to run NodePressure ...
	I0907 00:56:46.507417   46768 start.go:228] waiting for startup goroutines ...
	I0907 00:56:46.507422   46768 start.go:233] waiting for cluster config update ...
	I0907 00:56:46.507433   46768 start.go:242] writing updated cluster config ...
	I0907 00:56:46.507728   46768 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:46.559712   46768 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:46.561693   46768 out.go:177] * Done! kubectl is now configured to use "no-preload-321164" cluster and "default" namespace by default
	I0907 00:56:43.245531   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:45.746168   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:48.247228   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:50.746605   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:52.748264   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:55.246186   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:57.746658   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:00.245358   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:02.246373   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:04.746154   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:07.245583   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:09.246215   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:11.247141   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:13.247249   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:13.440321   46354 pod_ready.go:81] duration metric: took 4m0.000811237s waiting for pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace to be "Ready" ...
	E0907 00:57:13.440352   46354 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:57:13.440368   46354 pod_ready.go:38] duration metric: took 4m1.198343499s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:57:13.440395   46354 kubeadm.go:640] restartCluster took 5m7.071390852s
	W0907 00:57:13.440463   46354 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0907 00:57:13.440538   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0907 00:57:26.505313   46354 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.064737983s)
	I0907 00:57:26.505392   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:57:26.521194   46354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:57:26.530743   46354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:57:26.540431   46354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:57:26.540473   46354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
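The init above is driven by the config staged at /var/tmp/minikube/kubeadm.yaml (copied from kubeadm.yaml.new a few lines earlier, after the reset). If the generated kubeadm configuration needs to be inspected, it can be read off the node; a sketch using the profile name this run reports further down (old-k8s-version-940806):

    minikube -p old-k8s-version-940806 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml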
	I0907 00:57:26.744360   46354 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:57:39.131760   46354 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0907 00:57:39.131857   46354 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 00:57:39.131964   46354 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:57:39.132110   46354 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:57:39.132226   46354 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 00:57:39.132360   46354 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:57:39.132501   46354 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:57:39.132573   46354 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0907 00:57:39.132654   46354 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:57:39.134121   46354 out.go:204]   - Generating certificates and keys ...
	I0907 00:57:39.134212   46354 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 00:57:39.134313   46354 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 00:57:39.134422   46354 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0907 00:57:39.134501   46354 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0907 00:57:39.134605   46354 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0907 00:57:39.134688   46354 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0907 00:57:39.134801   46354 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0907 00:57:39.134902   46354 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0907 00:57:39.135010   46354 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0907 00:57:39.135121   46354 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0907 00:57:39.135169   46354 kubeadm.go:322] [certs] Using the existing "sa" key
	I0907 00:57:39.135241   46354 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:57:39.135308   46354 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:57:39.135393   46354 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:57:39.135512   46354 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:57:39.135599   46354 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:57:39.135700   46354 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:57:39.137273   46354 out.go:204]   - Booting up control plane ...
	I0907 00:57:39.137369   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:57:39.137458   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:57:39.137561   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:57:39.137677   46354 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:57:39.137888   46354 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 00:57:39.138013   46354 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503675 seconds
	I0907 00:57:39.138137   46354 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:57:39.138249   46354 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:57:39.138297   46354 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:57:39.138402   46354 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-940806 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0907 00:57:39.138453   46354 kubeadm.go:322] [bootstrap-token] Using token: nfcsq1.o4ef3s2bthacz2l0
	I0907 00:57:39.139754   46354 out.go:204]   - Configuring RBAC rules ...
	I0907 00:57:39.139848   46354 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:57:39.139970   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:57:39.140112   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 00:57:39.140245   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:57:39.140327   46354 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:57:39.140393   46354 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 00:57:39.140442   46354 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 00:57:39.140452   46354 kubeadm.go:322] 
	I0907 00:57:39.140525   46354 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 00:57:39.140533   46354 kubeadm.go:322] 
	I0907 00:57:39.140628   46354 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 00:57:39.140635   46354 kubeadm.go:322] 
	I0907 00:57:39.140665   46354 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 00:57:39.140752   46354 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:57:39.140822   46354 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:57:39.140834   46354 kubeadm.go:322] 
	I0907 00:57:39.140896   46354 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 00:57:39.140960   46354 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:57:39.141043   46354 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:57:39.141051   46354 kubeadm.go:322] 
	I0907 00:57:39.141159   46354 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0907 00:57:39.141262   46354 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 00:57:39.141276   46354 kubeadm.go:322] 
	I0907 00:57:39.141407   46354 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nfcsq1.o4ef3s2bthacz2l0 \
	I0907 00:57:39.141536   46354 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 00:57:39.141568   46354 kubeadm.go:322]     --control-plane 	  
	I0907 00:57:39.141575   46354 kubeadm.go:322] 
	I0907 00:57:39.141657   46354 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:57:39.141665   46354 kubeadm.go:322] 
	I0907 00:57:39.141730   46354 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nfcsq1.o4ef3s2bthacz2l0 \
	I0907 00:57:39.141832   46354 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:57:39.141851   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:57:39.141863   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:57:39.143462   46354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:57:39.144982   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:57:39.158663   46354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
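	(The conflist written above is identified only by its size, 457 bytes; its contents are not reproduced in this log. As a hedged illustration of the general shape of a bridge CNI configuration only — every field value below is an assumption, not taken from this run — such a file could be produced with something like:

	  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isGateway": true,
	        "ipMasq": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF
	)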
	I0907 00:57:39.180662   46354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:57:39.180747   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.180749   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=old-k8s-version-940806 minikube.k8s.io/updated_at=2023_09_07T00_57_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.208969   46354 ops.go:34] apiserver oom_adj: -16
	I0907 00:57:39.426346   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.545090   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:40.162127   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:40.662172   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:41.162069   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:41.662164   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:42.162355   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:42.662152   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:43.161862   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:43.661532   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:44.162130   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:44.661948   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:45.162260   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:45.662082   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:46.162345   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:46.662378   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:47.162307   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:47.662556   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:48.162204   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:48.661938   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:49.161608   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:49.662198   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:50.162016   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:50.662392   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:51.162303   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:51.662393   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:52.162510   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:52.662195   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:53.162302   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:53.662427   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.162085   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.662218   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.779895   46354 kubeadm.go:1081] duration metric: took 15.599222217s to wait for elevateKubeSystemPrivileges.
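	(The long run of "kubectl get sa default" calls above is the wait recorded here as elevateKubeSystemPrivileges: minikube re-queries until the "default" ServiceAccount exists before treating the RBAC bootstrap as finished. A hedged shell sketch of an equivalent wait loop — the interval and retry count are assumptions:

	  # poll until the "default" ServiceAccount is created (illustrative only)
	  KCTL="sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	  for i in $(seq 1 60); do
	    if $KCTL get sa default >/dev/null 2>&1; then
	      break
	    fi
	    sleep 0.5
	  done
	)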
	I0907 00:57:54.779927   46354 kubeadm.go:406] StartCluster complete in 5m48.456500898s
	I0907 00:57:54.779949   46354 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:57:54.780038   46354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:57:54.782334   46354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:57:54.782624   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:57:54.782772   46354 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:57:54.782871   46354 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-940806"
	I0907 00:57:54.782890   46354 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-940806"
	I0907 00:57:54.782900   46354 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-940806"
	W0907 00:57:54.782908   46354 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:57:54.782918   46354 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-940806"
	W0907 00:57:54.782926   46354 addons.go:240] addon metrics-server should already be in state true
	I0907 00:57:54.782880   46354 config.go:182] Loaded profile config "old-k8s-version-940806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:57:54.782889   46354 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-940806"
	I0907 00:57:54.783049   46354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-940806"
	I0907 00:57:54.782963   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.782963   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.783499   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783500   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783528   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.783533   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.783571   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783599   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.802026   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44005
	I0907 00:57:54.802487   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.803108   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.803131   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.803164   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
	I0907 00:57:54.803164   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I0907 00:57:54.803512   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.803674   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.803710   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.804184   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.804215   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.804239   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.804259   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.804311   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.804327   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.804569   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.804668   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.804832   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.805067   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.805094   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.821660   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39335
	I0907 00:57:54.822183   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.822694   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.822720   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.823047   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.823247   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.823707   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I0907 00:57:54.824135   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.825021   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.825046   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.825082   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.827174   46354 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:57:54.825428   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.828768   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:57:54.828787   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:57:54.828808   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.829357   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.831479   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.833553   46354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:57:54.832288   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.832776   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.834996   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.835038   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.835055   46354 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:57:54.835067   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:57:54.835083   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.835140   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.835307   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.835410   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.836403   46354 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-940806"
	W0907 00:57:54.836424   46354 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:57:54.836451   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.836822   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.836851   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.838476   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.838920   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.838951   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.839218   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.839540   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.839719   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.839896   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.854883   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0907 00:57:54.855311   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.855830   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.855858   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.856244   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.856713   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.856737   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.872940   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39937
	I0907 00:57:54.873442   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.874030   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.874057   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.874433   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.874665   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.876568   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.876928   46354 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:57:54.876947   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:57:54.876966   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.879761   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.879993   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.880015   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.880248   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.880424   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.880591   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.880694   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.933915   46354 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-940806" context rescaled to 1 replicas
	I0907 00:57:54.933965   46354 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:57:54.936214   46354 out.go:177] * Verifying Kubernetes components...
	I0907 00:57:54.937844   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:57:55.011087   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:57:55.011114   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:57:55.020666   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:57:55.038411   46354 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-940806" to be "Ready" ...
	I0907 00:57:55.038474   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
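	(The pipeline above edits the coredns ConfigMap in place: the sed expressions add a "log" directive before "errors" and insert a "hosts" block ahead of the "forward . /etc/resolv.conf" line, so that host.minikube.internal resolves to 192.168.83.1, the host address recorded a few lines below. Reconstructed from those sed expressions, with the surrounding Corefile lines elided, the inserted fragment looks roughly like:

	          log
	          errors
	          ...
	          hosts {
	             192.168.83.1 host.minikube.internal
	             fallthrough
	          }
	          forward . /etc/resolv.conf
	)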
	I0907 00:57:55.066358   46354 node_ready.go:49] node "old-k8s-version-940806" has status "Ready":"True"
	I0907 00:57:55.066382   46354 node_ready.go:38] duration metric: took 27.94281ms waiting for node "old-k8s-version-940806" to be "Ready" ...
	I0907 00:57:55.066393   46354 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:57:55.076936   46354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace to be "Ready" ...
	I0907 00:57:55.118806   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:57:55.118835   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:57:55.145653   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:57:55.158613   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:57:55.158636   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:57:55.214719   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:57:56.905329   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.884630053s)
	I0907 00:57:56.905379   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905377   46354 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.866875113s)
	I0907 00:57:56.905392   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905403   46354 start.go:901] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I0907 00:57:56.905417   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.759735751s)
	I0907 00:57:56.905441   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905455   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905794   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.905842   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.905858   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.905878   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.905895   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905910   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905963   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906013   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906037   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.906047   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.906286   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906340   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906293   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906325   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906436   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906449   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.906459   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.906630   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906729   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906732   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906749   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.087889   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.873113752s)
	I0907 00:57:57.087946   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:57.087979   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:57.088366   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:57.089849   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:57.089880   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.089892   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:57.089899   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:57.090126   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:57.090146   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.090155   46354 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-940806"
	I0907 00:57:57.093060   46354 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:57:57.094326   46354 addons.go:502] enable addons completed in 2.311555161s: enabled=[storage-provisioner default-storageclass metrics-server]
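	(With the three addons enabled at this point, a hedged way to confirm their state outside these logs — the profile name is taken from this run; the exact output format varies by minikube version:

	  minikube -p old-k8s-version-940806 addons list
	  # storage-provisioner, default-storageclass and metrics-server should be listed as enabled
	)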
	I0907 00:57:57.115594   46354 pod_ready.go:102] pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:59.609005   46354 pod_ready.go:102] pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace has status "Ready":"False"
	I0907 00:58:00.605260   46354 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rf6lv" not found
	I0907 00:58:00.605285   46354 pod_ready.go:81] duration metric: took 5.528319392s waiting for pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace to be "Ready" ...
	E0907 00:58:00.605296   46354 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rf6lv" not found
	I0907 00:58:00.605305   46354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.623994   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace has status "Ready":"True"
	I0907 00:58:02.624020   46354 pod_ready.go:81] duration metric: took 2.01870868s waiting for pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.624039   46354 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bt454" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.629264   46354 pod_ready.go:92] pod "kube-proxy-bt454" in "kube-system" namespace has status "Ready":"True"
	I0907 00:58:02.629282   46354 pod_ready.go:81] duration metric: took 5.236562ms waiting for pod "kube-proxy-bt454" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.629288   46354 pod_ready.go:38] duration metric: took 7.562884581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:58:02.629301   46354 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:58:02.629339   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:58:02.644494   46354 api_server.go:72] duration metric: took 7.710498225s to wait for apiserver process to appear ...
	I0907 00:58:02.644515   46354 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:58:02.644529   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:58:02.651352   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 200:
	ok
	I0907 00:58:02.652147   46354 api_server.go:141] control plane version: v1.16.0
	I0907 00:58:02.652186   46354 api_server.go:131] duration metric: took 7.646808ms to wait for apiserver health ...
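	(The probe above is a plain HTTPS GET against the apiserver's /healthz endpoint on the node IP and port shown earlier. A hedged manual equivalent from the host — the CA path is an assumption based on the certificateDir logged during kubeadm init:

	  # expect HTTP 200 with the body "ok", as in the probe above
	  curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.83.245:8443/healthz
	)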
	I0907 00:58:02.652199   46354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:58:02.656482   46354 system_pods.go:59] 4 kube-system pods found
	I0907 00:58:02.656506   46354 system_pods.go:61] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.656513   46354 system_pods.go:61] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.656524   46354 system_pods.go:61] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.656534   46354 system_pods.go:61] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.656541   46354 system_pods.go:74] duration metric: took 4.333279ms to wait for pod list to return data ...
	I0907 00:58:02.656553   46354 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:58:02.659079   46354 default_sa.go:45] found service account: "default"
	I0907 00:58:02.659102   46354 default_sa.go:55] duration metric: took 2.543265ms for default service account to be created ...
	I0907 00:58:02.659110   46354 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:58:02.663028   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:02.663050   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.663058   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.663069   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.663077   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.663094   46354 retry.go:31] will retry after 205.506153ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:02.874261   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:02.874291   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.874299   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.874309   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.874318   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.874335   46354 retry.go:31] will retry after 265.617543ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:03.145704   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:03.145736   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:03.145745   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:03.145755   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:03.145764   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:03.145782   46354 retry.go:31] will retry after 459.115577ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:03.610425   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:03.610458   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:03.610466   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:03.610474   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:03.610482   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:03.610498   46354 retry.go:31] will retry after 411.97961ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:04.026961   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:04.026992   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:04.026997   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:04.027004   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:04.027011   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:04.027024   46354 retry.go:31] will retry after 633.680519ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:04.665840   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:04.665868   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:04.665877   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:04.665889   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:04.665899   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:04.665916   46354 retry.go:31] will retry after 680.962565ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:05.352621   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:05.352644   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:05.352652   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:05.352699   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:05.352710   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:05.352725   46354 retry.go:31] will retry after 939.996523ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:06.298740   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:06.298765   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:06.298770   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:06.298791   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:06.298803   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:06.298820   46354 retry.go:31] will retry after 1.103299964s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:07.407728   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:07.407753   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:07.407758   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:07.407766   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:07.407772   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:07.407785   46354 retry.go:31] will retry after 1.13694803s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:08.550198   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:08.550228   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:08.550236   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:08.550245   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:08.550252   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:08.550269   46354 retry.go:31] will retry after 2.240430665s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:10.796203   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:10.796228   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:10.796233   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:10.796240   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:10.796246   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:10.796261   46354 retry.go:31] will retry after 2.183105097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:12.985467   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:12.985491   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:12.985500   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:12.985510   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:12.985518   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:12.985535   46354 retry.go:31] will retry after 2.428546683s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:15.419138   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:15.419163   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:15.419168   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:15.419174   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:15.419181   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:15.419195   46354 retry.go:31] will retry after 2.778392129s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:18.202590   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:18.202621   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:18.202629   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:18.202639   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:18.202648   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:18.202670   46354 retry.go:31] will retry after 5.204092587s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:23.412120   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:23.412144   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:23.412157   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:23.412164   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:23.412171   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:23.412187   46354 retry.go:31] will retry after 6.095121382s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:29.513424   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:29.513449   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:29.513454   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:29.513462   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:29.513468   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:29.513482   46354 retry.go:31] will retry after 6.142679131s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:35.662341   46354 system_pods.go:86] 5 kube-system pods found
	I0907 00:58:35.662367   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:35.662372   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:35.662377   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Pending
	I0907 00:58:35.662383   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:35.662390   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:35.662408   46354 retry.go:31] will retry after 10.800349656s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:46.468817   46354 system_pods.go:86] 6 kube-system pods found
	I0907 00:58:46.468845   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:46.468854   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:58:46.468859   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:46.468867   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:58:46.468876   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:46.468884   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:46.468901   46354 retry.go:31] will retry after 10.570531489s: missing components: kube-apiserver, kube-controller-manager
	I0907 00:58:57.047784   46354 system_pods.go:86] 8 kube-system pods found
	I0907 00:58:57.047865   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:57.047892   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:58:57.048256   46354 system_pods.go:89] "kube-apiserver-old-k8s-version-940806" [6a513b1a-cad2-4136-a7b0-a86df04f6c09] Pending
	I0907 00:58:57.048272   46354 system_pods.go:89] "kube-controller-manager-old-k8s-version-940806" [5ff6ffdb-1b2c-4498-84ad-e2811a8dd16a] Pending
	I0907 00:58:57.048279   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:57.048286   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:58:57.048301   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:57.048315   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:57.048345   46354 retry.go:31] will retry after 14.06926028s: missing components: kube-apiserver, kube-controller-manager
	I0907 00:59:11.124216   46354 system_pods.go:86] 8 kube-system pods found
	I0907 00:59:11.124242   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:59:11.124248   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:59:11.124252   46354 system_pods.go:89] "kube-apiserver-old-k8s-version-940806" [6a513b1a-cad2-4136-a7b0-a86df04f6c09] Running
	I0907 00:59:11.124257   46354 system_pods.go:89] "kube-controller-manager-old-k8s-version-940806" [5ff6ffdb-1b2c-4498-84ad-e2811a8dd16a] Running
	I0907 00:59:11.124261   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:59:11.124265   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:59:11.124272   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:59:11.124276   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:59:11.124283   46354 system_pods.go:126] duration metric: took 1m8.465167722s to wait for k8s-apps to be running ...
	I0907 00:59:11.124289   46354 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:59:11.124328   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:59:11.140651   46354 system_svc.go:56] duration metric: took 16.348641ms WaitForService to wait for kubelet.
	I0907 00:59:11.140686   46354 kubeadm.go:581] duration metric: took 1m16.206690472s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:59:11.140714   46354 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:59:11.144185   46354 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:59:11.144212   46354 node_conditions.go:123] node cpu capacity is 2
	I0907 00:59:11.144224   46354 node_conditions.go:105] duration metric: took 3.50462ms to run NodePressure ...
	I0907 00:59:11.144235   46354 start.go:228] waiting for startup goroutines ...
	I0907 00:59:11.144244   46354 start.go:233] waiting for cluster config update ...
	I0907 00:59:11.144259   46354 start.go:242] writing updated cluster config ...
	I0907 00:59:11.144547   46354 ssh_runner.go:195] Run: rm -f paused
	I0907 00:59:11.194224   46354 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0907 00:59:11.196420   46354 out.go:177] 
	W0907 00:59:11.197939   46354 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0907 00:59:11.199287   46354 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0907 00:59:11.200770   46354 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-940806" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-07 00:51:24 UTC, ends at Thu 2023-09-07 01:05:26 UTC. --
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.334510691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ce70badf-9795-4b15-a622-23a91146ee45 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.334592401Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&PodSandboxMetadata{Name:busybox,Uid:5fd80493-eaa4-4576-b185-e4544930616c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047927273562709,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:59.307202608Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-wdnpc,Uid:98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:169404
7926973383965,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:59.307210538Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7343017645a3b3f79206b5070b251a826e57b55aa3282563e8b652bacadd391b,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-2w2m6,Uid:70d0ed87-ab6c-4f43-b12d-4730244d67db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047924928896593,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-2w2m6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70d0ed87-ab6c-4f43-b12d-4730244d67db,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07
T00:51:59.307225811Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&PodSandboxMetadata{Name:kube-proxy-5bh7n,Uid:28b4df63-f3db-4544-ab5d-54a021be48bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047919716687702,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28b4df63-f3db-4544-ab5d-54a021be48bf,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:59.307222561Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:54e9c6d3-3c07-4afe-94cd-e57f83ba3152,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047919681346109,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2023-09-07T00:51:59.307214804Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-773466,Uid:4cac465f33f5c79f9d0221b16fad139b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047912839147827,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cac465f33f5c79f9d0221b16fad139b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.96:2379,kubernetes.io/config.hash: 4cac465f33f5c79f9d0221b16fad139b,kubernetes.io/config.seen: 2023-09-07T00:51:52.297830059Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&PodSandboxMetadata{Name:k
ube-apiserver-default-k8s-diff-port-773466,Uid:9c667ef6664b0c4031e2445ab302b1ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047912832606931,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c667ef6664b0c4031e2445ab302b1ac,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.96:8444,kubernetes.io/config.hash: 9c667ef6664b0c4031e2445ab302b1ac,kubernetes.io/config.seen: 2023-09-07T00:51:52.297831004Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-773466,Uid:2ff67be2492143e50f19261845f2b3bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047912810649496,Labels:map[str
ing]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2ff67be2492143e50f19261845f2b3bf,kubernetes.io/config.seen: 2023-09-07T00:51:52.297824881Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-773466,Uid:5dbc3cb98b05a56f58e47c0d93f0d7ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047912797493209,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbc3cb98b05a56f58e47c0d93f0d7ac,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 5dbc3cb98b05a56f58e47c0d93f0d7ac,kubernetes.io/config.seen: 2023-09-07T00:51:52.297828966Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=492fe478-bd4c-40b6-b9c2-d6471b1d93ec name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.335744460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=39c61a77-3b90-4aa2-b301-0dc3af31447e name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.335805216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=39c61a77-3b90-4aa2-b301-0dc3af31447e name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.336140391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
28b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=39c61a77-3b90-4aa2-b301-0dc3af31447e name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.374497278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7d0af715-58d1-4c14-b8c9-2d2df1a9ce52 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.374560434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7d0af715-58d1-4c14-b8c9-2d2df1a9ce52 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.374753102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7d0af715-58d1-4c14-b8c9-2d2df1a9ce52 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.397341834Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=bda5517e-0ce8-4d30-987f-2252a1929613 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.397621323Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&PodSandboxMetadata{Name:busybox,Uid:5fd80493-eaa4-4576-b185-e4544930616c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047927273562709,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:59.307202608Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-wdnpc,Uid:98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:169404
7926973383965,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:59.307210538Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7343017645a3b3f79206b5070b251a826e57b55aa3282563e8b652bacadd391b,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-2w2m6,Uid:70d0ed87-ab6c-4f43-b12d-4730244d67db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047924928896593,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-2w2m6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70d0ed87-ab6c-4f43-b12d-4730244d67db,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07
T00:51:59.307225811Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&PodSandboxMetadata{Name:kube-proxy-5bh7n,Uid:28b4df63-f3db-4544-ab5d-54a021be48bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047919716687702,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28b4df63-f3db-4544-ab5d-54a021be48bf,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:59.307222561Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:54e9c6d3-3c07-4afe-94cd-e57f83ba3152,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047919681346109,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2023-09-07T00:51:59.307214804Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-773466,Uid:4cac465f33f5c79f9d0221b16fad139b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047912839147827,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cac465f33f5c79f9d0221b16fad139b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.96:2379,kubernetes.io/config.hash: 4cac465f33f5c79f9d0221b16fad139b,kubernetes.io/config.seen: 2023-09-07T00:51:52.297830059Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&PodSandboxMetadata{Name:k
ube-apiserver-default-k8s-diff-port-773466,Uid:9c667ef6664b0c4031e2445ab302b1ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047912832606931,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c667ef6664b0c4031e2445ab302b1ac,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.96:8444,kubernetes.io/config.hash: 9c667ef6664b0c4031e2445ab302b1ac,kubernetes.io/config.seen: 2023-09-07T00:51:52.297831004Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-773466,Uid:2ff67be2492143e50f19261845f2b3bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047912810649496,Labels:map[str
ing]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2ff67be2492143e50f19261845f2b3bf,kubernetes.io/config.seen: 2023-09-07T00:51:52.297824881Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-773466,Uid:5dbc3cb98b05a56f58e47c0d93f0d7ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047912797493209,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbc3cb98b05a56f58e47c0d93f0d7ac,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 5dbc3cb98b05a56f58e47c0d93f0d7ac,kubernetes.io/config.seen: 2023-09-07T00:51:52.297828966Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=bda5517e-0ce8-4d30-987f-2252a1929613 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.398496254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d18f4b2b-e503-41f7-b926-85f35805441b name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.398576999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d18f4b2b-e503-41f7-b926-85f35805441b name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.398778434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d18f4b2b-e503-41f7-b926-85f35805441b name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.414063687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=930a8d3c-8f6e-4253-a5c3-3579464bbebd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.414166455Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=930a8d3c-8f6e-4253-a5c3-3579464bbebd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.414409949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=930a8d3c-8f6e-4253-a5c3-3579464bbebd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.450364045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7f2303cb-4445-4906-b1b0-6c8a1f3530f2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.450479545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7f2303cb-4445-4906-b1b0-6c8a1f3530f2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.450790856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7f2303cb-4445-4906-b1b0-6c8a1f3530f2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.489897722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ae12f972-0f2f-4053-816e-3719e2a0f76b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.490073380Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ae12f972-0f2f-4053-816e-3719e2a0f76b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.490307253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ae12f972-0f2f-4053-816e-3719e2a0f76b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.529734177Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6011e669-f360-4e2e-8a7b-237617ed9cde name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.529858424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6011e669-f360-4e2e-8a7b-237617ed9cde name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:26 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:05:26.530311754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6011e669-f360-4e2e-8a7b-237617ed9cde name=/runtime.v1alpha2.RuntimeService/ListContainers
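The repeated debug entries above are CRI clients polling CRI-O's ListContainers RPC over its CRI socket. As a rough illustration (not part of the test run), the same listing can be pulled by hand on the guest with crictl, assuming crictl in the minikube VM is already pointed at CRI-O (otherwise add --runtime-endpoint unix:///var/run/crio/crio.sock):

  out/minikube-linux-amd64 -p default-k8s-diff-port-773466 ssh "sudo crictl ps -a"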
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	a7c3d8a195ffd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   d0699c8de3106
	a99d9f4d79e52       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   bbdf1a69d21dc
	d28e9dadd44da       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   47d994feeba10
	cdcb5afe48490       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   d0699c8de3106
	0672903c9cfb1       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      13 minutes ago      Running             kube-proxy                1                   f2f0fa2c21a79
	a0f6bff336882       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      13 minutes ago      Running             kube-scheduler            1                   2fcd735eea535
	0692c75701ac7       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      13 minutes ago      Running             kube-controller-manager   1                   e2d5bd5f133d4
	e985c2c9d202b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   636d63364a128
	891a5075955e0       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      13 minutes ago      Running             kube-apiserver            1                   eb837fe5c83c4
	
	* 
	* ==> coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50593 - 24799 "HINFO IN 8877089458389055375.4368464280314516910. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011942331s
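The HINFO query with a random name above is typically CoreDNS's loop-detection self-probe, so this log only shows that CoreDNS came up cleanly. A quick in-cluster DNS spot check (illustrative only, assuming a busybox image can be pulled to the node) would look like:

  kubectl --context default-k8s-diff-port-773466 run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default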
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-773466
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-773466
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=default-k8s-diff-port-773466
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_07T00_45_29_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:45:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-773466
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Sep 2023 01:05:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 01:02:41 +0000   Thu, 07 Sep 2023 00:45:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 01:02:41 +0000   Thu, 07 Sep 2023 00:45:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 01:02:41 +0000   Thu, 07 Sep 2023 00:45:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 01:02:41 +0000   Thu, 07 Sep 2023 00:52:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    default-k8s-diff-port-773466
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5a5a6de89e84c62bfe1fc623205e445
	  System UUID:                e5a5a6de-89e8-4c62-bfe1-fc623205e445
	  Boot ID:                    0b04f7f7-709b-4666-97bc-70f056534b6c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-wdnpc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-default-k8s-diff-port-773466                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-773466             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-773466    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-5bh7n                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-773466             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-57f55c9bc5-2w2m6                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-773466 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-773466 event: Registered Node default-k8s-diff-port-773466 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-773466 event: Registered Node default-k8s-diff-port-773466 in Controller
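The node summary above is the kubectl describe-node view for this profile; for reference, it can be regenerated directly against the same context with:

  kubectl --context default-k8s-diff-port-773466 describe node default-k8s-diff-port-773466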
	
	* 
	* ==> dmesg <==
	* [Sep 7 00:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.088210] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.557623] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.450528] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140245] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.711618] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.595629] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.139596] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.212928] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.118650] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[  +0.277506] systemd-fstab-generator[717]: Ignoring "noauto" for root device
	[ +18.097503] systemd-fstab-generator[936]: Ignoring "noauto" for root device
	[Sep 7 00:52] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] <==
	* {"level":"info","ts":"2023-09-07T00:51:55.470102Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f38f0aa72455c2b8","local-member-id":"d4b4d4eeb3ae7df8","added-peer-id":"d4b4d4eeb3ae7df8","added-peer-peer-urls":["https://192.168.39.96:2380"]}
	{"level":"info","ts":"2023-09-07T00:51:55.470209Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f38f0aa72455c2b8","local-member-id":"d4b4d4eeb3ae7df8","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:51:55.470273Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:51:55.473571Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-07T00:51:55.473758Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d4b4d4eeb3ae7df8","initial-advertise-peer-urls":["https://192.168.39.96:2380"],"listen-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.96:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-07T00:51:55.473784Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-07T00:51:55.473879Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2023-09-07T00:51:55.473885Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2023-09-07T00:51:56.836599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-07T00:51:56.836715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-07T00:51:56.83675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgPreVoteResp from d4b4d4eeb3ae7df8 at term 2"}
	{"level":"info","ts":"2023-09-07T00:51:56.83678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became candidate at term 3"}
	{"level":"info","ts":"2023-09-07T00:51:56.836823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgVoteResp from d4b4d4eeb3ae7df8 at term 3"}
	{"level":"info","ts":"2023-09-07T00:51:56.83685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became leader at term 3"}
	{"level":"info","ts":"2023-09-07T00:51:56.836876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4b4d4eeb3ae7df8 elected leader d4b4d4eeb3ae7df8 at term 3"}
	{"level":"info","ts":"2023-09-07T00:51:56.838782Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d4b4d4eeb3ae7df8","local-member-attributes":"{Name:default-k8s-diff-port-773466 ClientURLs:[https://192.168.39.96:2379]}","request-path":"/0/members/d4b4d4eeb3ae7df8/attributes","cluster-id":"f38f0aa72455c2b8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-07T00:51:56.838885Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:51:56.840084Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-07T00:51:56.840137Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-07T00:51:56.839035Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:51:56.840685Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-07T00:51:56.841434Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.96:2379"}
	{"level":"info","ts":"2023-09-07T01:01:56.866854Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":826}
	{"level":"info","ts":"2023-09-07T01:01:56.870394Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":826,"took":"3.123539ms","hash":3809355329}
	{"level":"info","ts":"2023-09-07T01:01:56.870458Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3809355329,"revision":826,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  01:05:26 up 14 min,  0 users,  load average: 0.11, 0.24, 0.16
	Linux default-k8s-diff-port-773466 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:01:59.582469       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:01:59.582593       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:01:59.582710       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:01:59.584038       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:02:58.420505       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.204.192:443: connect: connection refused
	I0907 01:02:58.420543       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:02:59.583502       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:02:59.583740       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:02:59.583832       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:02:59.584816       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:02:59.585001       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:02:59.585039       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:03:58.420569       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.204.192:443: connect: connection refused
	I0907 01:03:58.420801       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0907 01:04:58.420440       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.204.192:443: connect: connection refused
	I0907 01:04:58.420475       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:04:59.584526       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:04:59.584617       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:04:59.584624       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:04:59.586834       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:04:59.587004       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:04:59.587088       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] <==
	* I0907 00:59:42.000049       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:00:11.628028       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:00:12.008172       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:00:41.634852       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:00:42.018024       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:01:11.645079       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:01:12.034009       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:01:41.652539       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:01:42.044267       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:02:11.658426       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:02:12.053754       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:02:41.663753       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:02:42.065872       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0907 01:03:03.352791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="347.594µs"
	E0907 01:03:11.671120       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:03:12.076614       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0907 01:03:15.350337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="148.681µs"
	E0907 01:03:41.680139       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:03:42.085880       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:04:11.686380       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:04:12.097239       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:04:41.693035       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:04:42.114889       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:05:11.700394       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:05:12.123498       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] <==
	* I0907 00:52:01.001807       1 server_others.go:69] "Using iptables proxy"
	I0907 00:52:01.021002       1 node.go:141] Successfully retrieved node IP: 192.168.39.96
	I0907 00:52:01.119184       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0907 00:52:01.119232       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0907 00:52:01.122060       1 server_others.go:152] "Using iptables Proxier"
	I0907 00:52:01.122128       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0907 00:52:01.122276       1 server.go:846] "Version info" version="v1.28.1"
	I0907 00:52:01.122322       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:52:01.125198       1 config.go:315] "Starting node config controller"
	I0907 00:52:01.125249       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0907 00:52:01.132725       1 config.go:97] "Starting endpoint slice config controller"
	I0907 00:52:01.134105       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0907 00:52:01.133896       1 config.go:188] "Starting service config controller"
	I0907 00:52:01.134334       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0907 00:52:01.226255       1 shared_informer.go:318] Caches are synced for node config
	I0907 00:52:01.234859       1 shared_informer.go:318] Caches are synced for service config
	I0907 00:52:01.235138       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] <==
	* I0907 00:51:55.936413       1 serving.go:348] Generated self-signed cert in-memory
	W0907 00:51:58.483035       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0907 00:51:58.483192       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0907 00:51:58.483225       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0907 00:51:58.483341       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0907 00:51:58.574183       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0907 00:51:58.574292       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:51:58.581406       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0907 00:51:58.581525       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:51:58.588357       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0907 00:51:58.581546       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0907 00:51:58.690010       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-07 00:51:24 UTC, ends at Thu 2023-09-07 01:05:27 UTC. --
	Sep 07 01:02:49 default-k8s-diff-port-773466 kubelet[942]: E0907 01:02:49.360316     942 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rlt25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-2w2m6_kube-system(70d0ed87-ab6c-4f43-b12d-4730244d67db): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 07 01:02:49 default-k8s-diff-port-773466 kubelet[942]: E0907 01:02:49.360363     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:02:52 default-k8s-diff-port-773466 kubelet[942]: E0907 01:02:52.352824     942 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:02:52 default-k8s-diff-port-773466 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:02:52 default-k8s-diff-port-773466 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:02:52 default-k8s-diff-port-773466 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:03:03 default-k8s-diff-port-773466 kubelet[942]: E0907 01:03:03.334152     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:03:15 default-k8s-diff-port-773466 kubelet[942]: E0907 01:03:15.334447     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:03:27 default-k8s-diff-port-773466 kubelet[942]: E0907 01:03:27.335017     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:03:41 default-k8s-diff-port-773466 kubelet[942]: E0907 01:03:41.334044     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:03:52 default-k8s-diff-port-773466 kubelet[942]: E0907 01:03:52.350562     942 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:03:52 default-k8s-diff-port-773466 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:03:52 default-k8s-diff-port-773466 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:03:52 default-k8s-diff-port-773466 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:03:55 default-k8s-diff-port-773466 kubelet[942]: E0907 01:03:55.335184     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:04:10 default-k8s-diff-port-773466 kubelet[942]: E0907 01:04:10.340507     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:04:23 default-k8s-diff-port-773466 kubelet[942]: E0907 01:04:23.334239     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:04:38 default-k8s-diff-port-773466 kubelet[942]: E0907 01:04:38.335414     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:04:52 default-k8s-diff-port-773466 kubelet[942]: E0907 01:04:52.337982     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:04:52 default-k8s-diff-port-773466 kubelet[942]: E0907 01:04:52.359279     942 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:04:52 default-k8s-diff-port-773466 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:04:52 default-k8s-diff-port-773466 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:04:52 default-k8s-diff-port-773466 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:05:07 default-k8s-diff-port-773466 kubelet[942]: E0907 01:05:07.335131     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:05:22 default-k8s-diff-port-773466 kubelet[942]: E0907 01:05:22.335774     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	
	* 
	* ==> storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] <==
	* I0907 00:52:31.678683       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0907 00:52:31.694211       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0907 00:52:31.694308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0907 00:52:49.097407       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0907 00:52:49.097853       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-773466_4c75def1-2a84-4d43-adcc-210737c5e2f7!
	I0907 00:52:49.098588       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9c535dee-bad9-476a-b4c2-f4ef696ff918", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-773466_4c75def1-2a84-4d43-adcc-210737c5e2f7 became leader
	I0907 00:52:49.198212       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-773466_4c75def1-2a84-4d43-adcc-210737c5e2f7!
	
	* 
	* ==> storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] <==
	* I0907 00:52:01.071548       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0907 00:52:31.075651       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-773466 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-2w2m6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-773466 describe pod metrics-server-57f55c9bc5-2w2m6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-773466 describe pod metrics-server-57f55c9bc5-2w2m6: exit status 1 (65.807871ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-2w2m6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-773466 describe pod metrics-server-57f55c9bc5-2w2m6: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0907 00:57:40.641639   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:59:02.117279   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-321164 -n no-preload-321164
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-07 01:05:47.101562808 +0000 UTC m=+5285.823018349
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-321164 -n no-preload-321164
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-321164 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-321164 logs -n 25: (1.520505003s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-049830                           | kubernetes-upgrade-049830    | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-386196                              | cert-expiration-386196       | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-049830                           | kubernetes-upgrade-049830    | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-940806        | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:43 UTC | 07 Sep 23 00:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-321164             | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-546209            | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-488051 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | disable-driver-mounts-488051                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:45 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-940806             | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-773466  | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-321164                  | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-546209                 | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-773466       | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC | 07 Sep 23 00:56 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/07 00:48:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:48:30.668905   47297 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:48:30.669040   47297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:48:30.669051   47297 out.go:309] Setting ErrFile to fd 2...
	I0907 00:48:30.669055   47297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:48:30.669275   47297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:48:30.669849   47297 out.go:303] Setting JSON to false
	I0907 00:48:30.670802   47297 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5455,"bootTime":1694042256,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:48:30.670876   47297 start.go:138] virtualization: kvm guest
	I0907 00:48:30.673226   47297 out.go:177] * [default-k8s-diff-port-773466] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:48:30.675018   47297 notify.go:220] Checking for updates...
	I0907 00:48:30.675022   47297 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:48:30.676573   47297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:48:30.677899   47297 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:48:30.679390   47297 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:48:30.680678   47297 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:48:30.682324   47297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:48:30.684199   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:48:30.684737   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:48:30.684791   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:48:30.699093   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37855
	I0907 00:48:30.699446   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:48:30.699961   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:48:30.699981   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:48:30.700356   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:48:30.700531   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:48:30.700779   47297 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:48:30.701065   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:48:30.701099   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:48:30.715031   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41907
	I0907 00:48:30.715374   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:48:30.715847   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:48:30.715866   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:48:30.716151   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:48:30.716316   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:48:30.750129   47297 out.go:177] * Using the kvm2 driver based on existing profile
	I0907 00:48:30.751568   47297 start.go:298] selected driver: kvm2
	I0907 00:48:30.751584   47297 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:48:30.751680   47297 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:48:30.752362   47297 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:48:30.752458   47297 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:48:30.765932   47297 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 00:48:30.766254   47297 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:48:30.766285   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:48:30.766297   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:48:30.766312   47297 start_flags.go:321] config:
	{Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-77346
6 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:48:30.766449   47297 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:48:30.768165   47297 out.go:177] * Starting control plane node default-k8s-diff-port-773466 in cluster default-k8s-diff-port-773466
	I0907 00:48:28.807066   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:30.769579   47297 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:48:30.769605   47297 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0907 00:48:30.769618   47297 cache.go:57] Caching tarball of preloaded images
	I0907 00:48:30.769690   47297 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:48:30.769700   47297 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 00:48:30.769802   47297 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/config.json ...
	I0907 00:48:30.769965   47297 start.go:365] acquiring machines lock for default-k8s-diff-port-773466: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:48:34.886988   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:37.959093   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:44.039083   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:47.111100   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:53.191104   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:56.263090   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:02.343026   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:05.415059   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:11.495064   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:14.567091   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:20.647045   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:23.719041   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:29.799012   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:32.871070   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:38.951073   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:42.023127   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:48.103090   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:51.175063   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:57.255062   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:00.327063   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:06.407045   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:09.479083   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:15.559056   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:18.631050   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:24.711070   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:27.783032   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:30.786864   46768 start.go:369] acquired machines lock for "no-preload-321164" in 3m55.470116528s
	I0907 00:50:30.786911   46768 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:50:30.786932   46768 fix.go:54] fixHost starting: 
	I0907 00:50:30.787365   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:50:30.787402   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:50:30.802096   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0907 00:50:30.802471   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:50:30.803040   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:50:30.803070   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:50:30.803390   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:50:30.803609   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:30.803735   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:50:30.805366   46768 fix.go:102] recreateIfNeeded on no-preload-321164: state=Stopped err=<nil>
	I0907 00:50:30.805394   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	W0907 00:50:30.805601   46768 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:50:30.807478   46768 out.go:177] * Restarting existing kvm2 VM for "no-preload-321164" ...
	I0907 00:50:30.784621   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:50:30.784665   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:50:30.786659   46354 machine.go:91] provisioned docker machine in 4m37.428246924s
	I0907 00:50:30.786707   46354 fix.go:56] fixHost completed within 4m37.448613342s
	I0907 00:50:30.786715   46354 start.go:83] releasing machines lock for "old-k8s-version-940806", held for 4m37.448629588s
	W0907 00:50:30.786743   46354 start.go:672] error starting host: provision: host is not running
	W0907 00:50:30.786862   46354 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0907 00:50:30.786876   46354 start.go:687] Will try again in 5 seconds ...
	I0907 00:50:30.809015   46768 main.go:141] libmachine: (no-preload-321164) Calling .Start
	I0907 00:50:30.809182   46768 main.go:141] libmachine: (no-preload-321164) Ensuring networks are active...
	I0907 00:50:30.809827   46768 main.go:141] libmachine: (no-preload-321164) Ensuring network default is active
	I0907 00:50:30.810153   46768 main.go:141] libmachine: (no-preload-321164) Ensuring network mk-no-preload-321164 is active
	I0907 00:50:30.810520   46768 main.go:141] libmachine: (no-preload-321164) Getting domain xml...
	I0907 00:50:30.811434   46768 main.go:141] libmachine: (no-preload-321164) Creating domain...
	I0907 00:50:32.024103   46768 main.go:141] libmachine: (no-preload-321164) Waiting to get IP...
	I0907 00:50:32.024955   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.025314   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.025386   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.025302   47622 retry.go:31] will retry after 211.413529ms: waiting for machine to come up
	I0907 00:50:32.238887   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.239424   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.239452   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.239400   47622 retry.go:31] will retry after 306.62834ms: waiting for machine to come up
	I0907 00:50:32.547910   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.548378   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.548409   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.548318   47622 retry.go:31] will retry after 360.126343ms: waiting for machine to come up
	I0907 00:50:32.909809   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.910325   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.910356   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.910259   47622 retry.go:31] will retry after 609.953186ms: waiting for machine to come up
	I0907 00:50:33.522073   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:33.522437   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:33.522467   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:33.522382   47622 retry.go:31] will retry after 526.4152ms: waiting for machine to come up
	I0907 00:50:34.050028   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:34.050475   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:34.050503   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:34.050417   47622 retry.go:31] will retry after 748.311946ms: waiting for machine to come up
	I0907 00:50:34.799933   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:34.800367   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:34.800395   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:34.800321   47622 retry.go:31] will retry after 732.484316ms: waiting for machine to come up
	I0907 00:50:35.788945   46354 start.go:365] acquiring machines lock for old-k8s-version-940806: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:50:35.534154   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:35.534583   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:35.534606   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:35.534535   47622 retry.go:31] will retry after 1.217693919s: waiting for machine to come up
	I0907 00:50:36.754260   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:36.754682   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:36.754711   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:36.754634   47622 retry.go:31] will retry after 1.508287783s: waiting for machine to come up
	I0907 00:50:38.264195   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:38.264607   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:38.264630   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:38.264557   47622 retry.go:31] will retry after 1.481448978s: waiting for machine to come up
	I0907 00:50:39.748383   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:39.748865   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:39.748898   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:39.748803   47622 retry.go:31] will retry after 2.345045055s: waiting for machine to come up
	I0907 00:50:42.095158   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:42.095801   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:42.095832   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:42.095747   47622 retry.go:31] will retry after 3.269083195s: waiting for machine to come up
	I0907 00:50:45.369097   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:45.369534   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:45.369561   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:45.369448   47622 retry.go:31] will retry after 4.462134893s: waiting for machine to come up
	I0907 00:50:49.835862   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.836273   46768 main.go:141] libmachine: (no-preload-321164) Found IP for machine: 192.168.61.125
	I0907 00:50:49.836315   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has current primary IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.836342   46768 main.go:141] libmachine: (no-preload-321164) Reserving static IP address...
	I0907 00:50:49.836774   46768 main.go:141] libmachine: (no-preload-321164) Reserved static IP address: 192.168.61.125
	I0907 00:50:49.836794   46768 main.go:141] libmachine: (no-preload-321164) Waiting for SSH to be available...
	I0907 00:50:49.836827   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "no-preload-321164", mac: "52:54:00:eb:da:68", ip: "192.168.61.125"} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.836860   46768 main.go:141] libmachine: (no-preload-321164) DBG | skip adding static IP to network mk-no-preload-321164 - found existing host DHCP lease matching {name: "no-preload-321164", mac: "52:54:00:eb:da:68", ip: "192.168.61.125"}
	I0907 00:50:49.836880   46768 main.go:141] libmachine: (no-preload-321164) DBG | Getting to WaitForSSH function...
	I0907 00:50:49.838931   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.839299   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.839326   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.839464   46768 main.go:141] libmachine: (no-preload-321164) DBG | Using SSH client type: external
	I0907 00:50:49.839500   46768 main.go:141] libmachine: (no-preload-321164) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa (-rw-------)
	I0907 00:50:49.839538   46768 main.go:141] libmachine: (no-preload-321164) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:50:49.839557   46768 main.go:141] libmachine: (no-preload-321164) DBG | About to run SSH command:
	I0907 00:50:49.839568   46768 main.go:141] libmachine: (no-preload-321164) DBG | exit 0
	I0907 00:50:49.930557   46768 main.go:141] libmachine: (no-preload-321164) DBG | SSH cmd err, output: <nil>: 
	I0907 00:50:49.931033   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetConfigRaw
	I0907 00:50:49.931662   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:49.934286   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.934719   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.934755   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.934973   46768 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/config.json ...
	I0907 00:50:49.935197   46768 machine.go:88] provisioning docker machine ...
	I0907 00:50:49.935221   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:49.935409   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:49.935567   46768 buildroot.go:166] provisioning hostname "no-preload-321164"
	I0907 00:50:49.935586   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:49.935730   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:49.937619   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.937879   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.937899   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.938049   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:49.938303   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:49.938464   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:49.938624   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:49.938803   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:49.939300   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:49.939315   46768 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-321164 && echo "no-preload-321164" | sudo tee /etc/hostname
	I0907 00:50:50.076488   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-321164
	
	I0907 00:50:50.076513   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.079041   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.079362   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.079409   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.079614   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.079831   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.080013   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.080183   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.080361   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:50.080757   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:50.080775   46768 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-321164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-321164/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-321164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:50:51.203755   46833 start.go:369] acquired machines lock for "embed-certs-546209" in 4m11.274622402s
	I0907 00:50:51.203804   46833 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:50:51.203823   46833 fix.go:54] fixHost starting: 
	I0907 00:50:51.204233   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:50:51.204274   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:50:51.221096   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0907 00:50:51.221487   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:50:51.222026   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:50:51.222048   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:50:51.222401   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:50:51.222595   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:50:51.222757   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:50:51.224388   46833 fix.go:102] recreateIfNeeded on embed-certs-546209: state=Stopped err=<nil>
	I0907 00:50:51.224413   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	W0907 00:50:51.224585   46833 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:50:51.226812   46833 out.go:177] * Restarting existing kvm2 VM for "embed-certs-546209" ...
	I0907 00:50:50.214796   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:50:50.215590   46768 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:50:50.215629   46768 buildroot.go:174] setting up certificates
	I0907 00:50:50.215639   46768 provision.go:83] configureAuth start
	I0907 00:50:50.215659   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:50.215952   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:50.218581   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.218947   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.218970   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.219137   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.221833   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.222177   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.222221   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.222323   46768 provision.go:138] copyHostCerts
	I0907 00:50:50.222377   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:50:50.222390   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:50:50.222497   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:50:50.222628   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:50:50.222646   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:50:50.222682   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:50:50.222765   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:50:50.222784   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:50:50.222817   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:50:50.222880   46768 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.no-preload-321164 san=[192.168.61.125 192.168.61.125 localhost 127.0.0.1 minikube no-preload-321164]
	I0907 00:50:50.456122   46768 provision.go:172] copyRemoteCerts
	I0907 00:50:50.456175   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:50:50.456198   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.458665   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.459030   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.459053   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.459237   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.459468   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.459630   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.459766   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:50.549146   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:50:50.572002   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0907 00:50:50.595576   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:50:50.618054   46768 provision.go:86] duration metric: configureAuth took 402.401011ms
	I0907 00:50:50.618086   46768 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:50:50.618327   46768 config.go:182] Loaded profile config "no-preload-321164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:50:50.618410   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.620908   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.621255   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.621289   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.621432   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.621619   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.621752   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.621879   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.622006   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:50.622586   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:50.622611   46768 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:50:50.946938   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:50:50.946964   46768 machine.go:91] provisioned docker machine in 1.011750962s
	I0907 00:50:50.946975   46768 start.go:300] post-start starting for "no-preload-321164" (driver="kvm2")
	I0907 00:50:50.946989   46768 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:50:50.947015   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:50.947339   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:50:50.947367   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.950370   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.950754   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.950798   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.950909   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.951171   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.951331   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.951472   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.040440   46768 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:50:51.044700   46768 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:50:51.044728   46768 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:50:51.044816   46768 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:50:51.044899   46768 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:50:51.045018   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:50:51.053507   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:50:51.077125   46768 start.go:303] post-start completed in 130.134337ms
	I0907 00:50:51.077149   46768 fix.go:56] fixHost completed within 20.29021748s
	I0907 00:50:51.077174   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.079928   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.080266   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.080297   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.080516   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.080744   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.080909   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.081080   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.081255   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:51.081837   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:51.081853   46768 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:50:51.203596   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047851.182131777
	
	I0907 00:50:51.203636   46768 fix.go:206] guest clock: 1694047851.182131777
	I0907 00:50:51.203646   46768 fix.go:219] Guest: 2023-09-07 00:50:51.182131777 +0000 UTC Remote: 2023-09-07 00:50:51.077154021 +0000 UTC m=+255.896364351 (delta=104.977756ms)
	I0907 00:50:51.203664   46768 fix.go:190] guest clock delta is within tolerance: 104.977756ms
	I0907 00:50:51.203668   46768 start.go:83] releasing machines lock for "no-preload-321164", held for 20.416782491s
	I0907 00:50:51.203696   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.203977   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:51.207262   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.207708   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.207755   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.207926   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208394   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208563   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208644   46768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:50:51.208692   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.208755   46768 ssh_runner.go:195] Run: cat /version.json
	I0907 00:50:51.208777   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.211412   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211453   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211863   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.211901   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211931   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.211957   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.212132   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.212212   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.212318   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.212406   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.212477   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.212612   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.212722   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.212875   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.300796   46768 ssh_runner.go:195] Run: systemctl --version
	I0907 00:50:51.324903   46768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:50:51.465767   46768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:50:51.471951   46768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:50:51.472036   46768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:50:51.488733   46768 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:50:51.488761   46768 start.go:466] detecting cgroup driver to use...
	I0907 00:50:51.488831   46768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:50:51.501772   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:50:51.516019   46768 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:50:51.516083   46768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:50:51.530425   46768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:50:51.546243   46768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:50:51.649058   46768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:50:51.768622   46768 docker.go:212] disabling docker service ...
	I0907 00:50:51.768705   46768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:50:51.785225   46768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:50:51.797018   46768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:50:51.908179   46768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:50:52.021212   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:50:52.037034   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:50:52.055163   46768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:50:52.055218   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.065451   46768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:50:52.065520   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.076202   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.086865   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.096978   46768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:50:52.107492   46768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:50:52.117036   46768 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:50:52.117104   46768 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:50:52.130309   46768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:50:52.140016   46768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:50:52.249901   46768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:50:52.422851   46768 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:50:52.422928   46768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:50:52.427852   46768 start.go:534] Will wait 60s for crictl version
	I0907 00:50:52.427903   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:52.431904   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:50:52.472552   46768 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:50:52.472632   46768 ssh_runner.go:195] Run: crio --version
	I0907 00:50:52.526514   46768 ssh_runner.go:195] Run: crio --version
	I0907 00:50:52.580133   46768 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:50:51.228316   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Start
	I0907 00:50:51.228549   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring networks are active...
	I0907 00:50:51.229311   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring network default is active
	I0907 00:50:51.229587   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring network mk-embed-certs-546209 is active
	I0907 00:50:51.230001   46833 main.go:141] libmachine: (embed-certs-546209) Getting domain xml...
	I0907 00:50:51.230861   46833 main.go:141] libmachine: (embed-certs-546209) Creating domain...
	I0907 00:50:52.512329   46833 main.go:141] libmachine: (embed-certs-546209) Waiting to get IP...
	I0907 00:50:52.513160   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:52.513607   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:52.513709   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:52.513575   47738 retry.go:31] will retry after 266.575501ms: waiting for machine to come up
	I0907 00:50:52.782236   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:52.782674   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:52.782699   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:52.782623   47738 retry.go:31] will retry after 258.252832ms: waiting for machine to come up
	I0907 00:50:53.042276   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:53.042851   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:53.042886   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:53.042799   47738 retry.go:31] will retry after 480.751908ms: waiting for machine to come up
	I0907 00:50:53.525651   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:53.526280   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:53.526314   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:53.526222   47738 retry.go:31] will retry after 592.373194ms: waiting for machine to come up
	I0907 00:50:54.119935   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:54.120401   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:54.120440   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:54.120320   47738 retry.go:31] will retry after 602.269782ms: waiting for machine to come up
	I0907 00:50:54.723919   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:54.724403   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:54.724429   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:54.724356   47738 retry.go:31] will retry after 631.28427ms: waiting for machine to come up
	I0907 00:50:52.581522   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:52.584587   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:52.584995   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:52.585027   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:52.585212   46768 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0907 00:50:52.589138   46768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:50:52.602205   46768 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:50:52.602259   46768 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:50:52.633785   46768 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:50:52.633808   46768 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0907 00:50:52.633868   46768 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.633887   46768 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.633889   46768 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.633929   46768 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0907 00:50:52.633954   46768 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.633849   46768 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:52.633937   46768 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.634076   46768 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.635447   46768 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.635477   46768 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.635516   46768 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.635529   46768 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.635477   46768 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.635578   46768 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.635583   46768 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0907 00:50:52.635587   46768 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:52.868791   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.917664   46768 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0907 00:50:52.917705   46768 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.917740   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:52.921520   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.924174   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.924775   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0907 00:50:52.926455   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.927265   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.936511   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.936550   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.989863   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0907 00:50:52.989967   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.081783   46768 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0907 00:50:53.081828   46768 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:53.081876   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.200951   46768 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0907 00:50:53.200999   46768 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:53.201037   46768 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0907 00:50:53.201055   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201074   46768 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:53.201115   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201120   46768 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0907 00:50:53.201138   46768 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:53.201163   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201196   46768 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0907 00:50:53.201208   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0907 00:50:53.201220   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.201222   46768 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:53.201245   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.201254   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201257   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:53.213879   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:53.213909   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:53.214030   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:53.559290   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.356797   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:55.357248   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:55.357276   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:55.357208   47738 retry.go:31] will retry after 957.470134ms: waiting for machine to come up
	I0907 00:50:56.316920   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:56.317410   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:56.317437   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:56.317357   47738 retry.go:31] will retry after 929.647798ms: waiting for machine to come up
	I0907 00:50:57.249114   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:57.249599   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:57.249631   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:57.249548   47738 retry.go:31] will retry after 1.218276188s: waiting for machine to come up
	I0907 00:50:58.470046   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:58.470509   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:58.470539   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:58.470461   47738 retry.go:31] will retry after 2.324175972s: waiting for machine to come up
	I0907 00:50:55.219723   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (2.018454399s)
	I0907 00:50:55.219753   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0907 00:50:55.219835   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0: (2.018563387s)
	I0907 00:50:55.219874   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0907 00:50:55.219897   46768 ssh_runner.go:235] Completed: which crictl: (2.01861063s)
	I0907 00:50:55.219931   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1: (2.006023749s)
	I0907 00:50:55.219956   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:55.219965   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0907 00:50:55.219974   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:55.220018   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.220026   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1: (2.006085999s)
	I0907 00:50:55.220034   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1: (2.005987599s)
	I0907 00:50:55.220056   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0907 00:50:55.220062   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0907 00:50:55.220065   46768 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.660750078s)
	I0907 00:50:55.220091   46768 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0907 00:50:55.220107   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:50:55.220139   46768 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.220178   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:55.220141   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:50:55.263187   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0907 00:50:55.263256   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0907 00:50:55.263276   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.263282   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0907 00:50:55.263291   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:50:55.263321   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.263334   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0907 00:50:55.263428   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0907 00:50:55.263432   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.275710   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0907 00:50:58.251089   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.987744073s)
	I0907 00:50:58.251119   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0907 00:50:58.251125   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.987662447s)
	I0907 00:50:58.251143   46768 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:58.251164   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0907 00:50:58.251192   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:58.251253   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0907 00:50:58.256733   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0907 00:51:00.798145   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:00.798673   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:00.798702   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:00.798607   47738 retry.go:31] will retry after 1.874271621s: waiting for machine to come up
	I0907 00:51:02.674532   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:02.675085   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:02.675117   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:02.675050   47738 retry.go:31] will retry after 2.9595889s: waiting for machine to come up
	I0907 00:51:04.952628   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.701410779s)
	I0907 00:51:04.952741   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0907 00:51:04.952801   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:51:04.952854   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:51:05.636309   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:05.636744   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:05.636779   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:05.636694   47738 retry.go:31] will retry after 4.45645523s: waiting for machine to come up
	I0907 00:51:06.100759   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.147880303s)
	I0907 00:51:06.100786   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0907 00:51:06.100803   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:51:06.100844   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:51:08.663694   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.56282168s)
	I0907 00:51:08.663725   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0907 00:51:08.663754   46768 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:51:08.663803   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:51:10.023202   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.359374479s)
	I0907 00:51:10.023234   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0907 00:51:10.023276   46768 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0907 00:51:10.023349   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
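	[Editor's note] The block above is minikube's image-cache fallback for the no-preload profile: each required image is looked up in the CRI-O/podman store, removed with crictl when the stored ID does not match the expected hash, staged under /var/lib/minikube/images from the host-side cache, and then loaded with podman load. A simplified manual sketch of one such cycle, using the image tag and paths from this log and run inside the guest:
	  # look up the image ID currently in the CRI-O/podman store (a missing or mismatched ID means it needs a reload)
	  sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.28.1
	  # drop the stale copy, then load the tarball minikube staged from its host-side cache
	  sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	  sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1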
	I0907 00:51:11.739345   47297 start.go:369] acquired machines lock for "default-k8s-diff-port-773466" in 2m40.969329009s
	I0907 00:51:11.739394   47297 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:51:11.739419   47297 fix.go:54] fixHost starting: 
	I0907 00:51:11.739834   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:11.739870   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:11.755796   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38079
	I0907 00:51:11.756102   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:11.756564   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:51:11.756588   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:11.756875   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:11.757032   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:11.757185   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:51:11.758750   47297 fix.go:102] recreateIfNeeded on default-k8s-diff-port-773466: state=Stopped err=<nil>
	I0907 00:51:11.758772   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	W0907 00:51:11.758955   47297 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:51:11.761066   47297 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-773466" ...
	I0907 00:51:10.095825   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.096285   46833 main.go:141] libmachine: (embed-certs-546209) Found IP for machine: 192.168.50.242
	I0907 00:51:10.096312   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has current primary IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.096321   46833 main.go:141] libmachine: (embed-certs-546209) Reserving static IP address...
	I0907 00:51:10.096706   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "embed-certs-546209", mac: "52:54:00:96:b3:6a", ip: "192.168.50.242"} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.096731   46833 main.go:141] libmachine: (embed-certs-546209) Reserved static IP address: 192.168.50.242
	I0907 00:51:10.096750   46833 main.go:141] libmachine: (embed-certs-546209) DBG | skip adding static IP to network mk-embed-certs-546209 - found existing host DHCP lease matching {name: "embed-certs-546209", mac: "52:54:00:96:b3:6a", ip: "192.168.50.242"}
	I0907 00:51:10.096766   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Getting to WaitForSSH function...
	I0907 00:51:10.096777   46833 main.go:141] libmachine: (embed-certs-546209) Waiting for SSH to be available...
	I0907 00:51:10.098896   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.099227   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.099260   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.099360   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Using SSH client type: external
	I0907 00:51:10.099382   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa (-rw-------)
	I0907 00:51:10.099412   46833 main.go:141] libmachine: (embed-certs-546209) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:10.099428   46833 main.go:141] libmachine: (embed-certs-546209) DBG | About to run SSH command:
	I0907 00:51:10.099444   46833 main.go:141] libmachine: (embed-certs-546209) DBG | exit 0
	I0907 00:51:10.199038   46833 main.go:141] libmachine: (embed-certs-546209) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:10.199377   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetConfigRaw
	I0907 00:51:10.200006   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:10.202924   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.203328   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.203352   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.203576   46833 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/config.json ...
	I0907 00:51:10.203879   46833 machine.go:88] provisioning docker machine ...
	I0907 00:51:10.203908   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:10.204125   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.204290   46833 buildroot.go:166] provisioning hostname "embed-certs-546209"
	I0907 00:51:10.204312   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.204489   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.206898   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.207332   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.207365   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.207473   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.207643   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.207791   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.207920   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.208080   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:10.208476   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:10.208496   46833 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-546209 && echo "embed-certs-546209" | sudo tee /etc/hostname
	I0907 00:51:10.356060   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-546209
	
	I0907 00:51:10.356098   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.359533   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.359867   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.359896   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.360097   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.360284   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.360435   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.360629   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.360820   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:10.361504   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:10.361538   46833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-546209' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-546209/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-546209' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:10.503181   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:10.503211   46833 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:10.503238   46833 buildroot.go:174] setting up certificates
	I0907 00:51:10.503246   46833 provision.go:83] configureAuth start
	I0907 00:51:10.503254   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.503555   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:10.506514   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.506930   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.506955   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.507150   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.509772   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.510081   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.510111   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.510215   46833 provision.go:138] copyHostCerts
	I0907 00:51:10.510281   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:10.510292   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:10.510345   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:10.510438   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:10.510446   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:10.510466   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:10.510552   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:10.510559   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:10.510579   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:10.510638   46833 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.embed-certs-546209 san=[192.168.50.242 192.168.50.242 localhost 127.0.0.1 minikube embed-certs-546209]
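	[Editor's note] The server certificate generated here carries the SANs listed in the line above. A generic way to confirm them on the resulting server.pem (standard openssl, not a command from this log; the path is the ServerCertPath shown further below):
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'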
	I0907 00:51:10.947044   46833 provision.go:172] copyRemoteCerts
	I0907 00:51:10.947101   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:10.947122   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.949879   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.950221   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.950251   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.950456   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.950660   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.950849   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.950993   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.052610   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:11.077082   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0907 00:51:11.100979   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:51:11.124155   46833 provision.go:86] duration metric: configureAuth took 620.900948ms
	I0907 00:51:11.124176   46833 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:11.124389   46833 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:11.124456   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.127163   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.127498   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.127536   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.127813   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.128011   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.128201   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.128381   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.128560   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:11.129185   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:11.129214   46833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:11.467260   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:11.467297   46833 machine.go:91] provisioned docker machine in 1.263400182s
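	[Editor's note] The /etc/sysconfig/crio.minikube file written above only has an effect if CRI-O's systemd unit actually reads it as an environment file; that wiring is not visible in this log, so the commands below are a generic way to check it rather than output from this run:
	  systemctl cat crio | grep -i EnvironmentFile        # confirm the unit loads /etc/sysconfig/crio.minikube
	  tr '\0' ' ' </proc/$(pidof crio)/cmdline; echo      # confirm --insecure-registry 10.96.0.0/12 is on the live command line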
	I0907 00:51:11.467309   46833 start.go:300] post-start starting for "embed-certs-546209" (driver="kvm2")
	I0907 00:51:11.467321   46833 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:11.467343   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.467669   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:11.467715   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.470299   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.470675   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.470705   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.470846   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.471038   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.471191   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.471435   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.568708   46833 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:11.573505   46833 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:11.573533   46833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:11.573595   46833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:11.573669   46833 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:11.573779   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:11.582612   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:11.607383   46833 start.go:303] post-start completed in 140.062214ms
	I0907 00:51:11.607400   46833 fix.go:56] fixHost completed within 20.403578781s
	I0907 00:51:11.607419   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.609882   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.610233   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.610265   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.610411   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.610602   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.610792   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.610972   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.611161   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:11.611550   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:11.611563   46833 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0907 00:51:11.739146   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047871.687486971
	
	I0907 00:51:11.739167   46833 fix.go:206] guest clock: 1694047871.687486971
	I0907 00:51:11.739176   46833 fix.go:219] Guest: 2023-09-07 00:51:11.687486971 +0000 UTC Remote: 2023-09-07 00:51:11.607403696 +0000 UTC m=+271.818672785 (delta=80.083275ms)
	I0907 00:51:11.739196   46833 fix.go:190] guest clock delta is within tolerance: 80.083275ms
	I0907 00:51:11.739202   46833 start.go:83] releasing machines lock for "embed-certs-546209", held for 20.535419293s
	I0907 00:51:11.739232   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.739478   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:11.742078   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.742446   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.742474   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.742676   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743172   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743342   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743422   46833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:11.743470   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.743541   46833 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:11.743573   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.746120   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746484   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.746516   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746536   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746640   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.746843   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.746989   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.747015   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.747044   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.747169   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.747179   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.747394   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.747556   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.747717   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.839831   46833 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:11.861736   46833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:12.006017   46833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:12.011678   46833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:12.011739   46833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:12.026851   46833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:12.026871   46833 start.go:466] detecting cgroup driver to use...
	I0907 00:51:12.026934   46833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:12.040077   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:12.052962   46833 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:12.053018   46833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:12.066509   46833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:12.079587   46833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:12.189043   46833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:12.310997   46833 docker.go:212] disabling docker service ...
	I0907 00:51:12.311065   46833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:12.324734   46833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:12.336808   46833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:12.461333   46833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:12.584841   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:12.598337   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:12.615660   46833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:51:12.615736   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.626161   46833 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:12.626232   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.637475   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.647631   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.658444   46833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:12.669167   46833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:12.678558   46833 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:12.678614   46833 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:12.692654   46833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:12.703465   46833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:12.820819   46833 ssh_runner.go:195] Run: sudo systemctl restart crio
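	[Editor's note] The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf to set the pause image, the cgroupfs cgroup manager, and the conmon cgroup before crio is restarted. A generic check of the result (not a command from this log; expected values taken from the sed commands above):
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # expected:
	  #   pause_image = "registry.k8s.io/pause:3.9"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"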
	I0907 00:51:12.996574   46833 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:12.996650   46833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:13.002744   46833 start.go:534] Will wait 60s for crictl version
	I0907 00:51:13.002818   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:51:13.007287   46833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:13.042173   46833 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:13.042254   46833 ssh_runner.go:195] Run: crio --version
	I0907 00:51:13.090562   46833 ssh_runner.go:195] Run: crio --version
	I0907 00:51:13.145112   46833 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:51:13.146767   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:13.149953   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:13.150357   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:13.150388   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:13.150603   46833 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:13.154792   46833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
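	[Editor's note] The one-liner above is minikube's idempotent /etc/hosts edit: it strips any existing host.minikube.internal entry, appends the current one, and copies the temp file back so the entry is replaced rather than duplicated. The same pattern expanded for readability (hostname and IP from the log; generic shell, not taken verbatim from this run):
	  ip=192.168.50.1
	  name=host.minikube.internal
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$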
	I0907 00:51:13.166540   46833 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:51:13.166607   46833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:13.203316   46833 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:51:13.203391   46833 ssh_runner.go:195] Run: which lz4
	I0907 00:51:13.207399   46833 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0907 00:51:13.211826   46833 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:13.211854   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
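	[Editor's note] After the tarball lands at /preloaded.tar.lz4, minikube extracts it into the CRI-O image store under /var; that extraction step falls outside this excerpt, so the command below is an assumption about the typical follow-up rather than a line from this log:
	  # assumed follow-up: unpack the lz4-compressed preload into /var (requires lz4 in the guest)
	  sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4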
	I0907 00:51:10.979891   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0907 00:51:10.979935   46768 cache_images.go:123] Successfully loaded all cached images
	I0907 00:51:10.979942   46768 cache_images.go:92] LoadImages completed in 18.346122768s
	I0907 00:51:10.980017   46768 ssh_runner.go:195] Run: crio config
	I0907 00:51:11.044573   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:51:11.044595   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:11.044612   46768 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:11.044630   46768 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-321164 NodeName:no-preload-321164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:11.044749   46768 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-321164"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:11.044807   46768 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-321164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-321164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:51:11.044852   46768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:11.055469   46768 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:11.055527   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:11.063642   46768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0907 00:51:11.081151   46768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:11.098623   46768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0907 00:51:11.116767   46768 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:11.120552   46768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:11.133845   46768 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164 for IP: 192.168.61.125
	I0907 00:51:11.133876   46768 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:11.134026   46768 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:11.134092   46768 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:11.134173   46768 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.key
	I0907 00:51:11.134216   46768 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.key.05d6cdfc
	I0907 00:51:11.134252   46768 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.key
	I0907 00:51:11.134393   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:11.134436   46768 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:11.134455   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:11.134488   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:11.134512   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:11.134534   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:11.134576   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:11.135184   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:11.161212   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:51:11.185797   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:11.209084   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:51:11.233001   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:11.255646   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:11.278323   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:11.301913   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:11.324316   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:11.349950   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:11.375738   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:11.402735   46768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:11.421372   46768 ssh_runner.go:195] Run: openssl version
	I0907 00:51:11.426855   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:11.436392   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.440778   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.440825   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.446374   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:11.455773   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:11.465073   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.470197   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.470243   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.475740   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:11.484993   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:11.494256   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.498766   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.498825   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.504037   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:11.512896   46768 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:11.517289   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:11.523115   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:11.528780   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:11.534330   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:11.539777   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:11.545439   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
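The six openssl invocations above use "-checkend 86400" to confirm that each control-plane certificate stays valid for at least the next 24 hours (86,400 seconds) before the existing cluster is restarted. A minimal Go sketch of an equivalent check, assuming a PEM-encoded certificate on disk (the path and helper name are illustrative, not minikube's actual implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within the given duration (mirroring `openssl x509 -checkend`).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + d" is past the certificate's NotAfter timestamp.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path; the log above checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}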
	I0907 00:51:11.550878   46768 kubeadm.go:404] StartCluster: {Name:no-preload-321164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-321164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:11.550968   46768 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:11.551014   46768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:11.582341   46768 cri.go:89] found id: ""
	I0907 00:51:11.582409   46768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:11.591760   46768 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:11.591782   46768 kubeadm.go:636] restartCluster start
	I0907 00:51:11.591825   46768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:11.600241   46768 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:11.601258   46768 kubeconfig.go:92] found "no-preload-321164" server: "https://192.168.61.125:8443"
	I0907 00:51:11.603775   46768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:11.612221   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:11.612268   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:11.622330   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:11.622348   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:11.622392   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:11.632889   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:12.133626   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:12.133726   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:12.144713   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:12.633065   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:12.633145   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:12.648698   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:13.133304   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:13.133401   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:13.146822   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:13.633303   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:13.633374   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:13.648566   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:14.132966   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:14.133041   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:14.147847   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:14.633090   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:14.633177   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:14.648893   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.133388   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:15.133465   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:15.149162   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:11.762623   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Start
	I0907 00:51:11.762823   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring networks are active...
	I0907 00:51:11.763580   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring network default is active
	I0907 00:51:11.764022   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring network mk-default-k8s-diff-port-773466 is active
	I0907 00:51:11.764494   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Getting domain xml...
	I0907 00:51:11.765139   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Creating domain...
	I0907 00:51:13.032555   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting to get IP...
	I0907 00:51:13.033441   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.033887   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.033934   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.033855   47907 retry.go:31] will retry after 214.721735ms: waiting for machine to come up
	I0907 00:51:13.250549   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.251062   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.251090   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.251001   47907 retry.go:31] will retry after 260.305773ms: waiting for machine to come up
	I0907 00:51:13.512603   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.513144   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.513175   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.513088   47907 retry.go:31] will retry after 293.213959ms: waiting for machine to come up
	I0907 00:51:13.807649   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.808180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.808216   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.808128   47907 retry.go:31] will retry after 455.70029ms: waiting for machine to come up
	I0907 00:51:14.265914   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:14.266412   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:14.266444   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:14.266367   47907 retry.go:31] will retry after 761.48199ms: waiting for machine to come up
	I0907 00:51:15.029446   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.029916   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.029950   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:15.029868   47907 retry.go:31] will retry after 889.947924ms: waiting for machine to come up
	I0907 00:51:15.079606   46833 crio.go:444] Took 1.872243 seconds to copy over tarball
	I0907 00:51:15.079679   46833 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:51:18.068521   46833 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988813422s)
	I0907 00:51:18.068547   46833 crio.go:451] Took 2.988919 seconds to extract the tarball
	I0907 00:51:18.068557   46833 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:51:18.109973   46833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:18.154472   46833 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:51:18.154493   46833 cache_images.go:84] Images are preloaded, skipping loading
	I0907 00:51:18.154568   46833 ssh_runner.go:195] Run: crio config
	I0907 00:51:18.216517   46833 cni.go:84] Creating CNI manager for ""
	I0907 00:51:18.216549   46833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:18.216571   46833 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:18.216597   46833 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.242 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-546209 NodeName:embed-certs-546209 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:18.216747   46833 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-546209"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:18.216815   46833 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-546209 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:51:18.216863   46833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:18.230093   46833 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:18.230164   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:18.239087   46833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0907 00:51:18.256683   46833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:18.274030   46833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0907 00:51:18.294711   46833 ssh_runner.go:195] Run: grep 192.168.50.242	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:18.299655   46833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:18.312980   46833 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209 for IP: 192.168.50.242
	I0907 00:51:18.313028   46833 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:18.313215   46833 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:18.313283   46833 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:18.313382   46833 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/client.key
	I0907 00:51:18.313446   46833 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.key.5dc0f9a1
	I0907 00:51:18.313495   46833 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.key
	I0907 00:51:18.313607   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:18.313633   46833 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:18.313640   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:18.313665   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:18.313688   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:18.313709   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:18.313747   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:18.314356   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:18.344731   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:51:18.368872   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:18.397110   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:51:18.424441   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:18.452807   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:18.481018   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:18.509317   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:18.541038   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:18.565984   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:18.590863   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:18.614083   46833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:18.631295   46833 ssh_runner.go:195] Run: openssl version
	I0907 00:51:18.637229   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:18.651999   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.656999   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.657052   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.663109   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:18.675826   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:18.688358   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.693281   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.693331   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.699223   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:18.711511   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:18.724096   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.729285   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.729338   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.735410   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:18.747948   46833 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:18.753003   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:18.759519   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:18.765813   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:18.772328   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:18.778699   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:18.785207   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:51:18.791515   46833 kubeadm.go:404] StartCluster: {Name:embed-certs-546209 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:18.791636   46833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:18.791719   46833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:18.831468   46833 cri.go:89] found id: ""
	I0907 00:51:18.831544   46833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:18.843779   46833 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:18.843805   46833 kubeadm.go:636] restartCluster start
	I0907 00:51:18.843863   46833 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:18.854604   46833 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.855622   46833 kubeconfig.go:92] found "embed-certs-546209" server: "https://192.168.50.242:8443"
	I0907 00:51:18.857679   46833 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:18.867583   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.867640   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.879567   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.879587   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.879634   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.891098   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.391839   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.391932   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.405078   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.633045   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:15.633128   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:15.644837   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:16.133842   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:16.133926   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:16.148072   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:16.633750   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:16.633828   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:16.648961   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:17.133669   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:17.133757   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:17.148342   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:17.633967   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:17.634076   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:17.649188   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.133815   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.133917   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.148350   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.633962   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.634047   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.649195   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.133733   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.133821   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.145109   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.633727   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.633808   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.645272   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.133921   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.133990   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.145494   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.920914   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.921395   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.921430   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:15.921325   47907 retry.go:31] will retry after 952.422054ms: waiting for machine to come up
	I0907 00:51:16.875800   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:16.876319   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:16.876356   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:16.876272   47907 retry.go:31] will retry after 1.481584671s: waiting for machine to come up
	I0907 00:51:18.359815   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:18.360270   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:18.360308   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:18.360185   47907 retry.go:31] will retry after 1.355619716s: waiting for machine to come up
	I0907 00:51:19.717081   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:19.717458   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:19.717485   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:19.717419   47907 retry.go:31] will retry after 1.450172017s: waiting for machine to come up
	I0907 00:51:19.892019   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.038702   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.051318   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.391815   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.391913   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.404956   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.891503   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.891594   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.904473   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.391486   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.391563   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.405726   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.891257   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.891337   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.905422   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:22.392028   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:22.392137   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:22.408621   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:22.891926   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:22.892033   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:22.906116   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:23.391605   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:23.391684   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:23.404834   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:23.891360   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:23.891447   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:23.908340   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:24.391916   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:24.392007   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:24.408806   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.633099   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.633200   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.644181   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.133144   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.133227   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.144139   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.612786   46768 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:21.612814   46768 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:21.612826   46768 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:21.612881   46768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:21.643142   46768 cri.go:89] found id: ""
	I0907 00:51:21.643216   46768 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:21.658226   46768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:21.666895   46768 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:21.666960   46768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:21.675285   46768 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:21.675317   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:21.817664   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.473084   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.670341   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.752820   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
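Because the stale-config check failed, the restart path above rebuilds the control plane by re-running individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed /var/tmp/minikube/kubeadm.yaml rather than performing a full `kubeadm init`. A rough Go sketch of driving that same phase sequence locally, assuming a kubeadm binary on PATH (minikube actually issues these commands over SSH via its ssh_runner, as the log shows):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml" // config path taken from the log above
	// Phase order mirrors the restart sequence in the log.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		// Each phase runs as: kubeadm init phase <phase...> --config <cfg>
		args := append([]string{"init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", cfg)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm init phase %s failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}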
	I0907 00:51:22.842789   46768 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:22.842868   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:22.861783   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:23.383385   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:23.884041   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:24.384065   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:24.884077   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:21.168650   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:21.169014   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:21.169037   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:21.168966   47907 retry.go:31] will retry after 2.876055316s: waiting for machine to come up
	I0907 00:51:24.046598   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:24.046990   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:24.047020   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:24.046937   47907 retry.go:31] will retry after 2.837607521s: waiting for machine to come up
	I0907 00:51:24.891477   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:24.891564   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:24.908102   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:25.391625   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:25.391704   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:25.408399   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:25.892052   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:25.892166   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:25.909608   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:26.391529   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:26.391610   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:26.407459   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:26.891930   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:26.891994   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:26.908217   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:27.391815   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:27.391898   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:27.404370   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:27.891918   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:27.892001   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:27.904988   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:28.391570   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:28.391650   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:28.403968   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:28.868619   46833 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:28.868666   46833 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:28.868679   46833 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:28.868736   46833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:28.907258   46833 cri.go:89] found id: ""
	I0907 00:51:28.907332   46833 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:28.926539   46833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:28.938760   46833 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:28.938837   46833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:28.950550   46833 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:28.950576   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:29.092484   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:25.383423   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:25.413853   46768 api_server.go:72] duration metric: took 2.571070768s to wait for apiserver process to appear ...
	I0907 00:51:25.413877   46768 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:25.413895   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.168577   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:29.168617   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:29.168629   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.228753   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:29.228785   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:29.729501   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.735318   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:29.735345   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
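The [+]/[-] lines above are the apiserver's per-check healthz report: every poststarthook is listed individually, and the endpoint keeps returning 500 until each hook (here rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) has finished. The same report can be fetched by hand against the endpoint shown in the log; a minimal sketch, assuming the self-signed certificate minikube provisions (the earlier 403s are expected while the RBAC bootstrap roles, which include the anonymous-readable /healthz rule, are still being installed):

    # Probe the same endpoint minikube polls above (-k: self-signed apiserver cert).
    # "?verbose" asks for the per-check breakdown seen in the log output.
    curl -sk "https://192.168.61.125:8443/healthz?verbose"

    # A 403 or 500 still returns a body, so when scripting the wait check the
    # HTTP status code explicitly rather than curl's exit status.
    curl -sk -o /dev/null -w '%{http_code}\n' "https://192.168.61.125:8443/healthz"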
	I0907 00:51:26.886341   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:26.886797   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:26.886819   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:26.886742   47907 retry.go:31] will retry after 3.776269501s: waiting for machine to come up
	I0907 00:51:30.665170   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.665736   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Found IP for machine: 192.168.39.96
	I0907 00:51:30.665770   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Reserving static IP address...
	I0907 00:51:30.665788   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has current primary IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.666183   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-773466", mac: "52:54:00:61:2c:44", ip: "192.168.39.96"} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.666226   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | skip adding static IP to network mk-default-k8s-diff-port-773466 - found existing host DHCP lease matching {name: "default-k8s-diff-port-773466", mac: "52:54:00:61:2c:44", ip: "192.168.39.96"}
	I0907 00:51:30.666245   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Reserved static IP address: 192.168.39.96
	I0907 00:51:30.666262   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for SSH to be available...
	I0907 00:51:30.666279   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Getting to WaitForSSH function...
	I0907 00:51:30.668591   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.229871   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:30.240735   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:30.240764   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:30.729911   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:30.736989   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 200:
	ok
	I0907 00:51:30.746939   46768 api_server.go:141] control plane version: v1.28.1
	I0907 00:51:30.746964   46768 api_server.go:131] duration metric: took 5.333080985s to wait for apiserver health ...
	I0907 00:51:30.746973   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:51:30.746979   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:30.748709   46768 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:32.716941   46354 start.go:369] acquired machines lock for "old-k8s-version-940806" in 56.927952192s
	I0907 00:51:32.717002   46354 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:51:32.717014   46354 fix.go:54] fixHost starting: 
	I0907 00:51:32.717431   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:32.717466   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:32.735021   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39241
	I0907 00:51:32.735485   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:32.736057   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:51:32.736083   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:32.736457   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:32.736713   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:32.736903   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:51:32.738719   46354 fix.go:102] recreateIfNeeded on old-k8s-version-940806: state=Stopped err=<nil>
	I0907 00:51:32.738743   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	W0907 00:51:32.738924   46354 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:51:32.740721   46354 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-940806" ...
	I0907 00:51:32.742202   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Start
	I0907 00:51:32.742362   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring networks are active...
	I0907 00:51:32.743087   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring network default is active
	I0907 00:51:32.743499   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring network mk-old-k8s-version-940806 is active
	I0907 00:51:32.743863   46354 main.go:141] libmachine: (old-k8s-version-940806) Getting domain xml...
	I0907 00:51:32.744603   46354 main.go:141] libmachine: (old-k8s-version-940806) Creating domain...
	I0907 00:51:30.668969   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.670773   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.670838   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Using SSH client type: external
	I0907 00:51:30.670876   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa (-rw-------)
	I0907 00:51:30.670918   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:30.670934   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | About to run SSH command:
	I0907 00:51:30.670947   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | exit 0
	I0907 00:51:30.770939   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:30.771333   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetConfigRaw
	I0907 00:51:30.772100   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:30.775128   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.775616   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.775654   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.775923   47297 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/config.json ...
	I0907 00:51:30.776161   47297 machine.go:88] provisioning docker machine ...
	I0907 00:51:30.776180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:30.776399   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:30.776597   47297 buildroot.go:166] provisioning hostname "default-k8s-diff-port-773466"
	I0907 00:51:30.776618   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:30.776805   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:30.779367   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.779761   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.779793   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.780022   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:30.780238   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.780399   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.780534   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:30.780687   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:30.781088   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:30.781102   47297 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-773466 && echo "default-k8s-diff-port-773466" | sudo tee /etc/hostname
	I0907 00:51:30.932287   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-773466
	
	I0907 00:51:30.932320   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:30.935703   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.936111   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.936146   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.936324   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:30.936647   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.936851   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.937054   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:30.937266   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:30.937890   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:30.937932   47297 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-773466' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-773466/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-773466' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:31.091619   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:31.091654   47297 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:31.091707   47297 buildroot.go:174] setting up certificates
	I0907 00:51:31.091724   47297 provision.go:83] configureAuth start
	I0907 00:51:31.091746   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:31.092066   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:31.095183   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.095670   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.095710   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.095861   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:31.098597   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.098887   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.098962   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.099205   47297 provision.go:138] copyHostCerts
	I0907 00:51:31.099275   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:31.099291   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:31.099362   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:31.099516   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:31.099531   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:31.099563   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:31.099658   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:31.099671   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:31.099700   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:31.099807   47297 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-773466 san=[192.168.39.96 192.168.39.96 localhost 127.0.0.1 minikube default-k8s-diff-port-773466]
	I0907 00:51:31.793599   47297 provision.go:172] copyRemoteCerts
	I0907 00:51:31.793653   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:31.793676   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:31.796773   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.797153   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.797192   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.797362   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:31.797578   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:31.797751   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:31.797865   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:31.903781   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:31.935908   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0907 00:51:31.967385   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0907 00:51:31.998542   47297 provision.go:86] duration metric: configureAuth took 906.744341ms
	I0907 00:51:31.998576   47297 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:31.998836   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:31.998941   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.002251   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.002712   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.002747   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.002996   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.003300   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.003531   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.003717   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.003996   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:32.004637   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:32.004662   47297 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:32.413687   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:32.413765   47297 machine.go:91] provisioned docker machine in 1.637590059s
	I0907 00:51:32.413777   47297 start.go:300] post-start starting for "default-k8s-diff-port-773466" (driver="kvm2")
	I0907 00:51:32.413787   47297 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:32.413823   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.414183   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:32.414227   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.417432   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.417894   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.417954   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.418202   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.418371   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.418517   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.418625   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.523519   47297 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:32.528959   47297 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:32.528983   47297 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:32.529050   47297 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:32.529144   47297 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:32.529249   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:32.538827   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:32.569792   47297 start.go:303] post-start completed in 156.000078ms
	I0907 00:51:32.569819   47297 fix.go:56] fixHost completed within 20.830399155s
	I0907 00:51:32.569860   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.573180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.573599   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.573653   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.573846   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.574100   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.574292   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.574470   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.574658   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:32.575266   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:32.575282   47297 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:51:32.716793   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047892.656226759
	
	I0907 00:51:32.716819   47297 fix.go:206] guest clock: 1694047892.656226759
	I0907 00:51:32.716829   47297 fix.go:219] Guest: 2023-09-07 00:51:32.656226759 +0000 UTC Remote: 2023-09-07 00:51:32.569839112 +0000 UTC m=+181.933138455 (delta=86.387647ms)
	I0907 00:51:32.716855   47297 fix.go:190] guest clock delta is within tolerance: 86.387647ms
	I0907 00:51:32.716868   47297 start.go:83] releasing machines lock for "default-k8s-diff-port-773466", held for 20.977496549s
	I0907 00:51:32.716900   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.717205   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:32.720353   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.720794   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.720825   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.721001   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721495   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721675   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721767   47297 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:32.721813   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.721925   47297 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:32.721951   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.724909   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725154   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725464   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.725510   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725626   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.725808   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.725825   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.725845   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725869   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.725967   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.726058   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.726164   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.726216   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.726352   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.845353   47297 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:32.851616   47297 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:33.005642   47297 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:33.013527   47297 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:33.013603   47297 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:33.033433   47297 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:33.033467   47297 start.go:466] detecting cgroup driver to use...
	I0907 00:51:33.033538   47297 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:33.055861   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:33.073405   47297 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:33.073477   47297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:33.090484   47297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:33.104735   47297 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:33.245072   47297 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:33.411559   47297 docker.go:212] disabling docker service ...
	I0907 00:51:33.411625   47297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:33.429768   47297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:33.446597   47297 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:33.581915   47297 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:33.704648   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:33.721447   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:33.740243   47297 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:51:33.740330   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.750871   47297 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:33.750937   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.761620   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.774350   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.787718   47297 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:33.802740   47297 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:33.814899   47297 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:33.814975   47297 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:33.832422   47297 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:33.844513   47297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:34.020051   47297 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:34.252339   47297 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:34.252415   47297 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:34.258055   47297 start.go:534] Will wait 60s for crictl version
	I0907 00:51:34.258179   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:51:34.262511   47297 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:34.304552   47297 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:34.304626   47297 ssh_runner.go:195] Run: crio --version
	I0907 00:51:34.376009   47297 ssh_runner.go:195] Run: crio --version
	I0907 00:51:34.448097   47297 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
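The block above is minikube's standard CRI-O preparation: pin the pause (sandbox) image, switch the cgroup manager to cgroupfs so it matches the kubelet's cgroup driver, place the conmon monitor in the pod cgroup, make sure bridged pod traffic is visible to iptables, and restart the runtime. A condensed sketch of the same steps — the drop-in path and values are exactly those logged above; only the grouping into a single script is editorial:

    #!/usr/bin/env bash
    set -euo pipefail

    CONF=/etc/crio/crio.conf.d/02-crio.conf

    # Pin the sandbox (pause) image to the version kubeadm v1.28 expects.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"

    # CRI-O's cgroup manager must match the kubelet's cgroup driver (cgroupfs here),
    # and with cgroupfs the conmon process belongs in the pod cgroup.
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

    # Bridged pod traffic has to pass through iptables; load br_netfilter if the
    # sysctl key is missing, then enable IPv4 forwarding.
    sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 || sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null

    sudo systemctl daemon-reload
    sudo systemctl restart crio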
	I0907 00:51:29.972856   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.178016   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.291593   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.385791   46833 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:30.385865   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:30.404991   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:30.926995   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:31.427043   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:31.927049   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.426422   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.927274   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.955713   46833 api_server.go:72] duration metric: took 2.569919035s to wait for apiserver process to appear ...
	I0907 00:51:32.955739   46833 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:32.955757   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:32.956284   46833 api_server.go:269] stopped: https://192.168.50.242:8443/healthz: Get "https://192.168.50.242:8443/healthz": dial tcp 192.168.50.242:8443: connect: connection refused
	I0907 00:51:32.956316   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:32.957189   46833 api_server.go:269] stopped: https://192.168.50.242:8443/healthz: Get "https://192.168.50.242:8443/healthz": dial tcp 192.168.50.242:8443: connect: connection refused
	I0907 00:51:33.457905   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
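Both control planes above follow the same pattern once the kubeadm init phases finish: poll for the kube-apiserver process with pgrep, then poll /healthz until it answers. While the process is absent pgrep exits 1 (the "stopped: unable to get apiserver pid" lines), and while nothing is listening the healthz request fails with "connection refused". A minimal wait loop along those lines — the interval and timeout are assumptions; minikube's own retry and backoff logic lives in api_server.go:

    # Wait (up to ~2 minutes, assumed) for the apiserver process and a listening
    # socket, mirroring the two checks logged above.
    for _ in $(seq 120); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null &&
         curl -sk --max-time 2 -o /dev/null "https://192.168.50.242:8443/healthz"; then
        echo "apiserver is up"
        break
      fi
      sleep 1
    done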
	I0907 00:51:30.750097   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:51:30.784742   46768 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:51:30.828002   46768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:51:30.852490   46768 system_pods.go:59] 8 kube-system pods found
	I0907 00:51:30.852534   46768 system_pods.go:61] "coredns-5dd5756b68-6ndjc" [8f1f8224-b8b4-4fb6-8f6b-2f4a0fb18e17] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:51:30.852547   46768 system_pods.go:61] "etcd-no-preload-321164" [c4b2427c-d882-4d29-af41-553961e5ee48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:51:30.852559   46768 system_pods.go:61] "kube-apiserver-no-preload-321164" [339ca32b-a5a1-474c-a5db-c35e7f87506d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:51:30.852569   46768 system_pods.go:61] "kube-controller-manager-no-preload-321164" [36241c8a-13ce-4e68-887b-ed929258d688] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:51:30.852581   46768 system_pods.go:61] "kube-proxy-f7dm4" [69308cf3-c18e-4edb-b0ea-c7f34a51aed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:51:30.852595   46768 system_pods.go:61] "kube-scheduler-no-preload-321164" [e9b14f0e-7789-4d1d-9a15-02c88d4a1e3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:51:30.852606   46768 system_pods.go:61] "metrics-server-57f55c9bc5-s95n2" [938af7b2-936b-495c-84c9-d580ae646926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:51:30.852622   46768 system_pods.go:61] "storage-provisioner" [70c690a6-a383-4b3f-9817-954056580009] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:51:30.852633   46768 system_pods.go:74] duration metric: took 24.608458ms to wait for pod list to return data ...
	I0907 00:51:30.852646   46768 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:51:30.860785   46768 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:51:30.860811   46768 node_conditions.go:123] node cpu capacity is 2
	I0907 00:51:30.860821   46768 node_conditions.go:105] duration metric: took 8.167675ms to run NodePressure ...
	I0907 00:51:30.860837   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:31.343033   46768 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:51:31.349908   46768 kubeadm.go:787] kubelet initialised
	I0907 00:51:31.349936   46768 kubeadm.go:788] duration metric: took 6.87538ms waiting for restarted kubelet to initialise ...
	I0907 00:51:31.349944   46768 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:31.366931   46768 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:33.392559   46768 pod_ready.go:102] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:34.449546   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:34.452803   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:34.453196   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:34.453226   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:34.453551   47297 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:34.459166   47297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:34.475045   47297 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:51:34.475159   47297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:34.525380   47297 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:51:34.525495   47297 ssh_runner.go:195] Run: which lz4
	I0907 00:51:34.530921   47297 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0907 00:51:34.537992   47297 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:34.538062   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
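The failed stat above is minikube's existence check for the preloaded image tarball: when /preloaded.tar.lz4 is not already on the guest, the ~457 MB preload (the Kubernetes images plus the CRI-O image store) is copied over SSH before the cluster is configured. Roughly the same check and copy done by hand, using the key, address, and tarball path from the log; the scp invocation is an approximation of minikube's internal file copy, which writes the file with root privileges:

    KEY=/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa
    TARBALL=/home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4

    # Only copy if the guest does not already have the preload.
    if ! ssh -i "$KEY" docker@192.168.39.96 'stat /preloaded.tar.lz4 >/dev/null 2>&1'; then
      # Stage under /tmp (a plain scp as the docker user cannot write to /),
      # then move it to the path minikube expects.
      scp -i "$KEY" "$TARBALL" docker@192.168.39.96:/tmp/preloaded.tar.lz4
      ssh -i "$KEY" docker@192.168.39.96 'sudo mv /tmp/preloaded.tar.lz4 /preloaded.tar.lz4'
    fi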
	I0907 00:51:34.298412   46354 main.go:141] libmachine: (old-k8s-version-940806) Waiting to get IP...
	I0907 00:51:34.299510   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.300108   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.300166   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.300103   48085 retry.go:31] will retry after 237.599934ms: waiting for machine to come up
	I0907 00:51:34.539798   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.540306   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.540406   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.540348   48085 retry.go:31] will retry after 321.765824ms: waiting for machine to come up
	I0907 00:51:34.864120   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.864735   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.864761   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.864698   48085 retry.go:31] will retry after 485.375139ms: waiting for machine to come up
	I0907 00:51:35.351583   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:35.352142   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:35.352174   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:35.352081   48085 retry.go:31] will retry after 490.428576ms: waiting for machine to come up
	I0907 00:51:35.844432   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:35.844896   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:35.844921   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:35.844821   48085 retry.go:31] will retry after 610.440599ms: waiting for machine to come up
	I0907 00:51:36.456988   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:36.457697   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:36.457720   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:36.457634   48085 retry.go:31] will retry after 704.547341ms: waiting for machine to come up
	I0907 00:51:37.163551   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:37.163973   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:37.164001   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:37.163926   48085 retry.go:31] will retry after 825.931424ms: waiting for machine to come up
	I0907 00:51:37.991936   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:37.992550   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:37.992583   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:37.992489   48085 retry.go:31] will retry after 952.175868ms: waiting for machine to come up
	I0907 00:51:37.065943   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:37.065973   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:37.065987   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.176178   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:37.176213   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:37.457739   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.464386   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:37.464423   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:37.958094   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.966530   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:37.966561   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:38.458170   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:38.465933   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 200:
	ok
	I0907 00:51:38.477109   46833 api_server.go:141] control plane version: v1.28.1
	I0907 00:51:38.477135   46833 api_server.go:131] duration metric: took 5.521389594s to wait for apiserver health ...
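	The 403 → 500 → 200 progression above is the apiserver coming up: anonymous /healthz probes are rejected until RBAC bootstrap finishes, post-start hooks then report failures, and finally the endpoint returns ok. As a rough illustration only (the URL, timeout, and retry cadence here are assumptions, not minikube's actual api_server.go code), a polling loop of this kind can be sketched in Go like so:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the timeout elapses. TLS verification is skipped because the probe runs
// before the caller trusts the cluster CA (illustrative assumption).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the ~0.5s retry cadence seen in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.242:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```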
	I0907 00:51:38.477143   46833 cni.go:84] Creating CNI manager for ""
	I0907 00:51:38.477149   46833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:38.478964   46833 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:38.480383   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:51:38.509844   46833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:51:38.549403   46833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:51:38.571430   46833 system_pods.go:59] 8 kube-system pods found
	I0907 00:51:38.571472   46833 system_pods.go:61] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:51:38.571491   46833 system_pods.go:61] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:51:38.571503   46833 system_pods.go:61] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:51:38.571563   46833 system_pods.go:61] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:51:38.571575   46833 system_pods.go:61] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:51:38.571592   46833 system_pods.go:61] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:51:38.571602   46833 system_pods.go:61] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:51:38.571613   46833 system_pods.go:61] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:51:38.571626   46833 system_pods.go:74] duration metric: took 22.19998ms to wait for pod list to return data ...
	I0907 00:51:38.571637   46833 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:51:38.581324   46833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:51:38.581361   46833 node_conditions.go:123] node cpu capacity is 2
	I0907 00:51:38.581373   46833 node_conditions.go:105] duration metric: took 9.730463ms to run NodePressure ...
	I0907 00:51:38.581393   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:39.140602   46833 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:51:39.147994   46833 kubeadm.go:787] kubelet initialised
	I0907 00:51:39.148025   46833 kubeadm.go:788] duration metric: took 7.397807ms waiting for restarted kubelet to initialise ...
	I0907 00:51:39.148034   46833 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:39.157241   46833 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.172898   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.172935   46833 pod_ready.go:81] duration metric: took 15.665673ms waiting for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.172947   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.172958   46833 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.180630   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "etcd-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.180666   46833 pod_ready.go:81] duration metric: took 7.698054ms waiting for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.180679   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "etcd-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.180692   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.202626   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.202658   46833 pod_ready.go:81] duration metric: took 21.956163ms waiting for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.202671   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.202699   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.210817   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.210849   46833 pod_ready.go:81] duration metric: took 8.138129ms waiting for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.210860   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.210882   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.801924   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-proxy-47255" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.801951   46833 pod_ready.go:81] duration metric: took 591.060955ms waiting for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.801963   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-proxy-47255" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.801970   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:35.403877   46768 pod_ready.go:102] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:36.394774   46768 pod_ready.go:92] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:36.394823   46768 pod_ready.go:81] duration metric: took 5.027852065s waiting for pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:36.394839   46768 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:38.429614   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:36.550649   47297 crio.go:444] Took 2.019779 seconds to copy over tarball
	I0907 00:51:36.550726   47297 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:51:40.133828   47297 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.583074443s)
	I0907 00:51:40.133861   47297 crio.go:451] Took 3.583177 seconds to extract the tarball
	I0907 00:51:40.133872   47297 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:51:40.177675   47297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:40.230574   47297 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:51:40.230594   47297 cache_images.go:84] Images are preloaded, skipping loading
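	The preload decision above hinges on whether `crictl images --output json` already lists the expected control-plane images; if not, the tarball is copied over and extracted. A hedged Go sketch of that check (the JSON field names are my best understanding of crictl's output schema, not taken from minikube's cache_images.go):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the relevant slice of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the CRI runtime already knows the given image tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.1")
	// false or an error would mean: copy and extract the preload tarball instead.
	fmt.Println(ok, err)
}
```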
	I0907 00:51:40.230654   47297 ssh_runner.go:195] Run: crio config
	I0907 00:51:40.296445   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:51:40.296473   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:40.296497   47297 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:40.296519   47297 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.96 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-773466 NodeName:default-k8s-diff-port-773466 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:40.296709   47297 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.96
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-773466"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:40.296793   47297 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-773466 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
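	The [Service] drop-in above is generated from the profile config (Kubernetes version, hostname override, node IP, CRI socket). As an illustrative sketch only, such a unit file could be rendered with text/template; the template text and field names below are assumptions, not minikube's actual kubeadm.go implementation:

```go
package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the values substituted into the systemd drop-in.
type kubeletOpts struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
	CRISocket         string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above; in practice they come from the profile config.
	_ = t.Execute(os.Stdout, kubeletOpts{
		KubernetesVersion: "v1.28.1",
		Hostname:          "default-k8s-diff-port-773466",
		NodeIP:            "192.168.39.96",
		CRISocket:         "unix:///var/run/crio/crio.sock",
	})
}
```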
	I0907 00:51:40.296850   47297 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:40.307543   47297 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:40.307642   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:40.318841   47297 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0907 00:51:40.337125   47297 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:40.354910   47297 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0907 00:51:40.375283   47297 ssh_runner.go:195] Run: grep 192.168.39.96	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:40.380206   47297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:40.394943   47297 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466 for IP: 192.168.39.96
	I0907 00:51:40.394980   47297 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.395194   47297 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:40.395231   47297 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:40.395295   47297 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.key
	I0907 00:51:40.410649   47297 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.key.e8bbde58
	I0907 00:51:40.410724   47297 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.key
	I0907 00:51:40.410868   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:40.410904   47297 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:40.410916   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:40.410942   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:40.410963   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:40.410985   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:40.411038   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:40.411575   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:40.441079   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 00:51:40.465854   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:40.495221   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:51:40.521493   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:40.548227   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:40.574366   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:40.599116   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:40.624901   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:40.650606   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:40.690154   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.690183   46833 pod_ready.go:81] duration metric: took 888.205223ms waiting for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:40.690194   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.690204   46833 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:40.697723   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.697750   46833 pod_ready.go:81] duration metric: took 7.538932ms waiting for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:40.697761   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.697773   46833 pod_ready.go:38] duration metric: took 1.549726748s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
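	The pod_ready.go entries above poll each system-critical pod for the Ready condition and skip pods whose hosting node is not yet Ready. A minimal client-go sketch of that per-pod check (the pod name, timeout, and helper names are illustrative assumptions, not minikube's code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition Ready=True.
func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		ok, err := podReady(ctx, client, "kube-system", "coredns-5dd5756b68-vrgm9")
		if ok || ctx.Err() != nil {
			fmt.Println("ready:", ok, "err:", err)
			return
		}
		time.Sleep(500 * time.Millisecond) // retry until Ready or the 4m budget runs out
	}
}
```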
	I0907 00:51:40.697793   46833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:51:40.709255   46833 ops.go:34] apiserver oom_adj: -16
	I0907 00:51:40.709281   46833 kubeadm.go:640] restartCluster took 21.865468537s
	I0907 00:51:40.709290   46833 kubeadm.go:406] StartCluster complete in 21.917781616s
	I0907 00:51:40.709309   46833 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.709403   46833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:51:40.712326   46833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.808025   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:51:40.808158   46833 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:51:40.808236   46833 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:40.808285   46833 addons.go:69] Setting metrics-server=true in profile "embed-certs-546209"
	I0907 00:51:40.808309   46833 addons.go:231] Setting addon metrics-server=true in "embed-certs-546209"
	W0907 00:51:40.808317   46833 addons.go:240] addon metrics-server should already be in state true
	I0907 00:51:40.808252   46833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-546209"
	I0907 00:51:40.808340   46833 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-546209"
	W0907 00:51:40.808354   46833 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:51:40.808375   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:40.808390   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:40.808257   46833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-546209"
	I0907 00:51:40.808493   46833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-546209"
	I0907 00:51:40.809864   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.809936   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.810411   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.810477   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.810518   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.810526   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.827159   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36263
	I0907 00:51:40.827608   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0907 00:51:40.827784   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.828059   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.828326   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.828354   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.828556   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.828579   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.828955   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.829067   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.829670   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.829715   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.829932   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.831070   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0907 00:51:40.831543   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.832142   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.832161   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.832527   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.834743   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.834801   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.853510   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0907 00:51:40.854194   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45027
	I0907 00:51:40.854261   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.854987   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.855019   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.855102   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.855381   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.855745   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.855791   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.855808   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.856430   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.856882   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.858468   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:41.154848   46833 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:51:40.859116   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:41.300012   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:51:41.362259   46833 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:51:41.362296   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:51:41.362332   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:41.460930   46833 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:51:41.460961   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:51:41.460988   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:41.464836   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465151   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465419   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:41.465455   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465590   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:41.465621   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465764   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:41.465908   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:41.465979   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:41.466055   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:41.466150   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:41.466196   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:41.466276   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:41.466309   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:41.587470   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:51:41.594683   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:51:41.594709   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:51:41.621438   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:51:41.621471   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:51:41.664886   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:51:41.664910   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:51:41.691795   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:51:41.886942   46833 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.078877765s)
	I0907 00:51:41.887038   46833 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0907 00:51:41.898851   46833 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-546209" context rescaled to 1 replicas
	I0907 00:51:41.898900   46833 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:51:42.014441   46833 out.go:177] * Verifying Kubernetes components...
	I0907 00:51:38.946740   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:38.947268   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:38.947292   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:38.947211   48085 retry.go:31] will retry after 1.334104337s: waiting for machine to come up
	I0907 00:51:40.282730   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:40.283209   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:40.283233   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:40.283168   48085 retry.go:31] will retry after 1.521256667s: waiting for machine to come up
	I0907 00:51:41.806681   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:41.807182   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:41.807211   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:41.807126   48085 retry.go:31] will retry after 1.907600342s: waiting for machine to come up
	I0907 00:51:42.132070   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:51:42.150876   46833 addons.go:231] Setting addon default-storageclass=true in "embed-certs-546209"
	W0907 00:51:42.150905   46833 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:51:42.150935   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:42.151329   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:42.151357   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:42.172605   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0907 00:51:42.173122   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:42.173662   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:42.173709   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:42.174155   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:42.174813   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:42.174877   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:42.196701   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0907 00:51:42.197287   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:42.197859   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:42.197882   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:42.198246   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:42.198418   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:42.200558   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:42.200942   46833 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:51:42.200954   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:51:42.200967   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:42.204259   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:42.204952   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:42.204975   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:42.205009   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:42.205139   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:42.205280   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:42.205405   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:42.377838   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:51:43.286666   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.699154782s)
	I0907 00:51:43.286720   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.286734   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.287148   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.287174   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.287190   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.287210   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.287220   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.288970   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.289008   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.289021   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.436691   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.744844788s)
	I0907 00:51:43.436717   46833 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.304610389s)
	I0907 00:51:43.436744   46833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-546209" to be "Ready" ...
	I0907 00:51:43.436758   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.436775   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.436862   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.05899604s)
	I0907 00:51:43.436883   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.436893   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.438856   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.438887   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.438903   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.438907   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.438914   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.438919   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.438924   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.438932   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.438934   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.439020   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.439206   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439219   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.439231   46833 addons.go:467] Verifying addon metrics-server=true in "embed-certs-546209"
	I0907 00:51:43.439266   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439277   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.439290   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.439299   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.439502   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439513   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.442917   46833 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0907 00:51:43.444226   46833 addons.go:502] enable addons completed in 2.636061813s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0907 00:51:40.924494   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:42.925582   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:40.679951   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:40.859542   47297 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:40.881658   47297 ssh_runner.go:195] Run: openssl version
	I0907 00:51:40.888518   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:40.902200   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.908038   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.908106   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.914418   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:40.927511   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:40.941360   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.947556   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.947622   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.953780   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:40.966576   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:40.981447   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:40.989719   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:40.989779   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:41.000685   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:41.017936   47297 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:41.023280   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:41.029915   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:41.038011   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:41.044570   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:41.052534   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:41.060580   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
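	The six openssl invocations above verify that none of the control-plane certificates expires within the next 24 hours ("-checkend 86400"). Below is a minimal, hypothetical Go sketch of that same check, for illustration only; minikube itself shells out to openssl over SSH exactly as logged.

	// expirycheck.go: illustrative sketch of what
	// "openssl x509 -noout -in <cert> -checkend 86400" verifies:
	// whether a certificate is still valid 24 hours from now.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at certPath
	// expires inside the given window (true mirrors a failing -checkend).
	func expiresWithin(certPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(window)), nil
	}

	func main() {
		for _, p := range os.Args[1:] {
			soon, err := expiresWithin(p, 24*time.Hour)
			if err != nil {
				fmt.Fprintln(os.Stderr, p, err)
				continue
			}
			fmt.Printf("%s expires within 24h: %v\n", p, soon)
		}
	}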
	I0907 00:51:41.068664   47297 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:41.068776   47297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:41.068897   47297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:41.111849   47297 cri.go:89] found id: ""
	I0907 00:51:41.111923   47297 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:41.126171   47297 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:41.126193   47297 kubeadm.go:636] restartCluster start
	I0907 00:51:41.126249   47297 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:41.138401   47297 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.139882   47297 kubeconfig.go:92] found "default-k8s-diff-port-773466" server: "https://192.168.39.96:8444"
	I0907 00:51:41.142907   47297 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:41.154285   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.154346   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.168992   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.169012   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.169057   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.183283   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.683942   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.684036   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.701647   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:42.183800   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:42.183882   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:42.213176   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:42.683460   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:42.683550   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:42.701805   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.184099   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:43.184206   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:43.202359   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.683466   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:43.683541   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:43.697133   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:44.183663   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:44.183750   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:44.201236   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:44.684320   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:44.684411   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:44.698198   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:45.183451   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:45.183533   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:45.197529   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
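	The repeated "Checking apiserver status" entries above are a fixed-interval poll: roughly every 500ms the restart logic re-runs pgrep until a kube-apiserver process appears or the deadline passes. A minimal sketch of that pattern in Go, assuming pgrep is available on the host; this is an illustration of the loop visible in the log, not minikube's implementation.

	// apiserverpoll.go: poll for a kube-apiserver process with pgrep,
	// giving up when the context deadline expires.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServerPID(ctx context.Context, interval time.Duration) (string, error) {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			// Same shape as the logged command:
			//   pgrep -xnf kube-apiserver.*minikube.*
			out, err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return string(out), nil // pgrep exits 0 once a matching process exists
			}
			select {
			case <-ctx.Done():
				return "", fmt.Errorf("timed out waiting for apiserver: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		pid, err := waitForAPIServerPID(ctx, 500*time.Millisecond)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver pid:", pid)
	}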
	I0907 00:51:43.716005   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:43.716632   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:43.716668   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:43.716570   48085 retry.go:31] will retry after 3.526983217s: waiting for machine to come up
	I0907 00:51:47.245213   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:47.245615   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:47.245645   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:47.245561   48085 retry.go:31] will retry after 3.453934877s: waiting for machine to come up
	I0907 00:51:45.450760   46833 node_ready.go:58] node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:47.949024   46833 node_ready.go:49] node "embed-certs-546209" has status "Ready":"True"
	I0907 00:51:47.949053   46833 node_ready.go:38] duration metric: took 4.512298071s waiting for node "embed-certs-546209" to be "Ready" ...
	I0907 00:51:47.949063   46833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:47.956755   46833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.964323   46833 pod_ready.go:92] pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:47.964345   46833 pod_ready.go:81] duration metric: took 7.56298ms waiting for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.964356   46833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.425347   46768 pod_ready.go:92] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.425370   46768 pod_ready.go:81] duration metric: took 9.030524984s waiting for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.425380   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.432508   46768 pod_ready.go:92] pod "kube-apiserver-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.432531   46768 pod_ready.go:81] duration metric: took 7.145112ms waiting for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.432545   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.441245   46768 pod_ready.go:92] pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.441265   46768 pod_ready.go:81] duration metric: took 8.713177ms waiting for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.441275   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f7dm4" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.446603   46768 pod_ready.go:92] pod "kube-proxy-f7dm4" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.446627   46768 pod_ready.go:81] duration metric: took 5.346628ms waiting for pod "kube-proxy-f7dm4" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.446641   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.453061   46768 pod_ready.go:92] pod "kube-scheduler-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.453091   46768 pod_ready.go:81] duration metric: took 6.442457ms waiting for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.453104   46768 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.730093   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:45.684191   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:45.684287   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:45.702020   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:46.183587   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:46.183697   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:46.201390   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:46.683442   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:46.683519   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:46.699015   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:47.183908   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:47.183998   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:47.196617   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:47.683929   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:47.683991   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:47.696499   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:48.183929   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:48.184000   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:48.197425   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:48.683932   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:48.684019   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:48.696986   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:49.184149   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:49.184224   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:49.197363   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:49.684066   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:49.684152   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:49.697853   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:50.183372   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:50.183490   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:50.195818   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:50.700500   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:50.700920   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:50.700939   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:50.700882   48085 retry.go:31] will retry after 4.6319983s: waiting for machine to come up
	I0907 00:51:49.984505   46833 pod_ready.go:102] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:51.987061   46833 pod_ready.go:102] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:53.485331   46833 pod_ready.go:92] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.485356   46833 pod_ready.go:81] duration metric: took 5.520993929s waiting for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.485368   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.491351   46833 pod_ready.go:92] pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.491371   46833 pod_ready.go:81] duration metric: took 5.996687ms waiting for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.491387   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.496425   46833 pod_ready.go:92] pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.496448   46833 pod_ready.go:81] duration metric: took 5.054087ms waiting for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.496460   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.504963   46833 pod_ready.go:92] pod "kube-proxy-47255" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.504982   46833 pod_ready.go:81] duration metric: took 8.515814ms waiting for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.504990   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.550180   46833 pod_ready.go:92] pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.550208   46833 pod_ready.go:81] duration metric: took 45.211992ms waiting for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.550222   46833 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:50.229069   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:52.233340   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:54.728824   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:50.683740   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:50.683806   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:50.695528   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:51.154940   47297 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:51.154990   47297 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:51.155002   47297 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:51.155052   47297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:51.190293   47297 cri.go:89] found id: ""
	I0907 00:51:51.190351   47297 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:51.207237   47297 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:51.216623   47297 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:51.216671   47297 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:51.226376   47297 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:51.226399   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:51.352763   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:51.879625   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.090367   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.169714   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.258757   47297 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:52.258861   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:52.274881   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:52.799083   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:53.298600   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:53.798807   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.299419   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.798660   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.824175   47297 api_server.go:72] duration metric: took 2.565415526s to wait for apiserver process to appear ...
	I0907 00:51:54.824203   47297 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:54.824222   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:55.335922   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.336311   46354 main.go:141] libmachine: (old-k8s-version-940806) Found IP for machine: 192.168.83.245
	I0907 00:51:55.336325   46354 main.go:141] libmachine: (old-k8s-version-940806) Reserving static IP address...
	I0907 00:51:55.336336   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has current primary IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.336816   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "old-k8s-version-940806", mac: "52:54:00:1f:83:50", ip: "192.168.83.245"} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.336872   46354 main.go:141] libmachine: (old-k8s-version-940806) Reserved static IP address: 192.168.83.245
	I0907 00:51:55.336893   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | skip adding static IP to network mk-old-k8s-version-940806 - found existing host DHCP lease matching {name: "old-k8s-version-940806", mac: "52:54:00:1f:83:50", ip: "192.168.83.245"}
	I0907 00:51:55.336909   46354 main.go:141] libmachine: (old-k8s-version-940806) Waiting for SSH to be available...
	I0907 00:51:55.336919   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Getting to WaitForSSH function...
	I0907 00:51:55.339323   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.339730   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.339768   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.339880   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Using SSH client type: external
	I0907 00:51:55.339907   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa (-rw-------)
	I0907 00:51:55.339946   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:55.339964   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | About to run SSH command:
	I0907 00:51:55.340001   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | exit 0
	I0907 00:51:55.483023   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:55.483362   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetConfigRaw
	I0907 00:51:55.484121   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:55.487091   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.487590   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.487621   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.487863   46354 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/config.json ...
	I0907 00:51:55.488067   46354 machine.go:88] provisioning docker machine ...
	I0907 00:51:55.488088   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:55.488332   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.488525   46354 buildroot.go:166] provisioning hostname "old-k8s-version-940806"
	I0907 00:51:55.488551   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.488707   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.491136   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.491567   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.491600   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.491818   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.491950   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.492058   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.492133   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.492237   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:55.492685   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:55.492705   46354 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-940806 && echo "old-k8s-version-940806" | sudo tee /etc/hostname
	I0907 00:51:55.648589   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-940806
	
	I0907 00:51:55.648628   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.651624   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.652046   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.652094   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.652282   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.652472   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.652654   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.652813   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.652977   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:55.653628   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:55.653657   46354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-940806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-940806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-940806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:55.805542   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:55.805573   46354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:55.805607   46354 buildroot.go:174] setting up certificates
	I0907 00:51:55.805617   46354 provision.go:83] configureAuth start
	I0907 00:51:55.805629   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.805907   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:55.808800   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.809142   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.809175   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.809299   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.811385   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.811785   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.811812   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.811980   46354 provision.go:138] copyHostCerts
	I0907 00:51:55.812089   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:55.812104   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:55.812172   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:55.812287   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:55.812297   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:55.812321   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:55.812418   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:55.812427   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:55.812463   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:55.812538   46354 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-940806 san=[192.168.83.245 192.168.83.245 localhost 127.0.0.1 minikube old-k8s-version-940806]
	I0907 00:51:55.920274   46354 provision.go:172] copyRemoteCerts
	I0907 00:51:55.920327   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:55.920348   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.923183   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.923599   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.923632   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.923816   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.924011   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.924174   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.924335   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.020317   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:56.048299   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0907 00:51:56.075483   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:51:56.101118   46354 provision.go:86] duration metric: configureAuth took 295.488336ms
	I0907 00:51:56.101150   46354 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:56.101338   46354 config.go:182] Loaded profile config "old-k8s-version-940806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:51:56.101407   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.104235   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.104600   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.104640   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.104878   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.105093   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.105306   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.105495   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.105668   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:56.106199   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:56.106217   46354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:56.435571   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:56.435644   46354 machine.go:91] provisioned docker machine in 947.562946ms
	I0907 00:51:56.435662   46354 start.go:300] post-start starting for "old-k8s-version-940806" (driver="kvm2")
	I0907 00:51:56.435679   46354 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:56.435712   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.436041   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:56.436083   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.439187   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.439537   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.439563   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.439888   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.440116   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.440285   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.440427   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.542162   46354 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:56.546357   46354 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:56.546375   46354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:56.546435   46354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:56.546511   46354 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:56.546648   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:56.556125   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:56.577844   46354 start.go:303] post-start completed in 142.166343ms
	I0907 00:51:56.577874   46354 fix.go:56] fixHost completed within 23.860860531s
	I0907 00:51:56.577898   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.580726   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.581062   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.581090   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.581221   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.581540   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.581742   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.581909   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.582113   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:56.582532   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:56.582553   46354 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:51:56.715584   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047916.695896692
	
	I0907 00:51:56.715607   46354 fix.go:206] guest clock: 1694047916.695896692
	I0907 00:51:56.715615   46354 fix.go:219] Guest: 2023-09-07 00:51:56.695896692 +0000 UTC Remote: 2023-09-07 00:51:56.57787864 +0000 UTC m=+363.381197654 (delta=118.018052ms)
	I0907 00:51:56.715632   46354 fix.go:190] guest clock delta is within tolerance: 118.018052ms
	I0907 00:51:56.715639   46354 start.go:83] releasing machines lock for "old-k8s-version-940806", held for 23.998669865s
	I0907 00:51:56.715658   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.715909   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:56.718637   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.718992   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.719030   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.719203   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719646   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719852   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719935   46354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:56.719980   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.720050   46354 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:56.720068   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.722463   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.722752   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.722809   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.722850   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.723041   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.723208   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.723241   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.723282   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.723394   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.723406   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.723599   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.723632   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.723797   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.723956   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.835700   46354 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:56.841554   46354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:56.988658   46354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:56.995421   46354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:56.995495   46354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:57.011588   46354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:57.011608   46354 start.go:466] detecting cgroup driver to use...
	I0907 00:51:57.011669   46354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:57.029889   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:57.043942   46354 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:57.044002   46354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:57.056653   46354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:57.069205   46354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:57.184510   46354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:57.323399   46354 docker.go:212] disabling docker service ...
	I0907 00:51:57.323477   46354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:57.336506   46354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:57.348657   46354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:57.464450   46354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:57.577763   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:57.590934   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:57.609445   46354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0907 00:51:57.609500   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.619112   46354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:57.619173   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.629272   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.638702   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
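Taken together, the sed edits above pin the pause image and switch CRI-O to the cgroupfs cgroup manager. A sketch of the relevant keys after the edits (only these three values are implied by the log; the rest of the drop-in is untouched):

    $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"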
	I0907 00:51:57.648720   46354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:57.659046   46354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:57.667895   46354 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:57.667971   46354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:57.681673   46354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
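The modprobe and echo above only apply for the running session. A common way to make the same settings survive a reboot (a general sketch, not something this test runs):

    # load br_netfilter at boot and keep the sysctls persistent
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system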
	I0907 00:51:57.690907   46354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:57.801113   46354 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:57.978349   46354 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:57.978432   46354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:57.983665   46354 start.go:534] Will wait 60s for crictl version
	I0907 00:51:57.983714   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:51:57.988244   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:58.019548   46354 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:58.019616   46354 ssh_runner.go:195] Run: crio --version
	I0907 00:51:58.068229   46354 ssh_runner.go:195] Run: crio --version
	I0907 00:51:58.118554   46354 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0907 00:51:58.120322   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:58.122944   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:58.123321   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:58.123377   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:58.123569   46354 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:58.128115   46354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:58.140862   46354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0907 00:51:58.140933   46354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:58.182745   46354 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0907 00:51:58.182829   46354 ssh_runner.go:195] Run: which lz4
	I0907 00:51:58.188491   46354 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0907 00:51:58.193202   46354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:58.193237   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0907 00:51:55.862451   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.363582   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.511655   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:58.511686   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:58.511699   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:58.549405   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:58.549442   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:59.050120   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:59.057915   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:59.057946   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:59.550150   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:59.559928   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:59.559970   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:52:00.050535   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:52:00.060556   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 200:
	ok
	I0907 00:52:00.069872   47297 api_server.go:141] control plane version: v1.28.1
	I0907 00:52:00.069898   47297 api_server.go:131] duration metric: took 5.245689478s to wait for apiserver health ...
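The healthz polling above can be reproduced by hand. An anonymous request gets the 403 seen earlier because RBAC forbids system:anonymous; once the bootstrap-roles hook finishes, the endpoint goes 500 then 200. A sketch (the context name is assumed to match this test profile):

    # anonymous probe: expect 403, then the verbose 500/200 component list
    curl -k 'https://192.168.39.96:8444/healthz?verbose'
    # authenticated probe through kubectl's raw API path
    kubectl --context default-k8s-diff-port-773466 get --raw '/healthz?verbose'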
	I0907 00:52:00.069906   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:52:00.069911   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:00.071700   47297 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:56.730172   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.731973   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:00.073858   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:52:00.098341   47297 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
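The 457-byte conflist copied above is the bridge CNI configuration that the "recommending bridge" decision refers to. For the 10.244.0.0/16 pod CIDR used in this run, it has roughly this shape (a sketch based on the bridge/host-local plugin defaults, not a dump of the actual file):

    $ cat /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }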
	I0907 00:52:00.120355   47297 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:52:00.137820   47297 system_pods.go:59] 8 kube-system pods found
	I0907 00:52:00.137936   47297 system_pods.go:61] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:52:00.137967   47297 system_pods.go:61] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:52:00.137989   47297 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:52:00.138007   47297 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:52:00.138018   47297 system_pods.go:61] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:52:00.138032   47297 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:52:00.138045   47297 system_pods.go:61] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:52:00.138058   47297 system_pods.go:61] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:52:00.138069   47297 system_pods.go:74] duration metric: took 17.695163ms to wait for pod list to return data ...
	I0907 00:52:00.138082   47297 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:52:00.145755   47297 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:52:00.145790   47297 node_conditions.go:123] node cpu capacity is 2
	I0907 00:52:00.145803   47297 node_conditions.go:105] duration metric: took 7.711411ms to run NodePressure ...
	I0907 00:52:00.145825   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:00.468823   47297 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:52:00.476107   47297 kubeadm.go:787] kubelet initialised
	I0907 00:52:00.476130   47297 kubeadm.go:788] duration metric: took 7.282541ms waiting for restarted kubelet to initialise ...
	I0907 00:52:00.476138   47297 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:00.483366   47297 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.495045   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.495072   47297 pod_ready.go:81] duration metric: took 11.633116ms waiting for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.495083   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.495092   47297 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.500465   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.500488   47297 pod_ready.go:81] duration metric: took 5.386997ms waiting for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.500498   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.500504   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.507318   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.507392   47297 pod_ready.go:81] duration metric: took 6.878563ms waiting for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.507416   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.507436   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.527784   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.527820   47297 pod_ready.go:81] duration metric: took 20.36412ms waiting for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.527833   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.527844   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.936895   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-proxy-5bh7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.936926   47297 pod_ready.go:81] duration metric: took 409.073374ms waiting for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.936938   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-proxy-5bh7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.936947   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:01.325746   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.325777   47297 pod_ready.go:81] duration metric: took 388.819699ms waiting for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:01.325787   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.325798   47297 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:01.725791   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.725828   47297 pod_ready.go:81] duration metric: took 400.019773ms waiting for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:01.725840   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.725852   47297 pod_ready.go:38] duration metric: took 1.249702286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:01.725871   47297 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:52:01.742792   47297 ops.go:34] apiserver oom_adj: -16
	I0907 00:52:01.742816   47297 kubeadm.go:640] restartCluster took 20.616616394s
	I0907 00:52:01.742825   47297 kubeadm.go:406] StartCluster complete in 20.674170679s
	I0907 00:52:01.742843   47297 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:01.742936   47297 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:52:01.744735   47297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:01.744998   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:52:01.745113   47297 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:52:01.745212   47297 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745218   47297 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745232   47297 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.745240   47297 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:52:01.745232   47297 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-773466"
	I0907 00:52:01.745268   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:52:01.745301   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.745248   47297 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745432   47297 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.745442   47297 addons.go:240] addon metrics-server should already be in state true
	I0907 00:52:01.745489   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.745709   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745718   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745753   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.745813   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.745895   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745930   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.755156   47297 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-773466" context rescaled to 1 replicas
	I0907 00:52:01.755193   47297 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:52:01.757452   47297 out.go:177] * Verifying Kubernetes components...
	I0907 00:52:01.759076   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:52:01.763067   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0907 00:52:01.763578   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.764125   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.764147   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.764483   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.764668   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.764804   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0907 00:52:01.765385   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.765972   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.765988   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.766336   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.768468   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I0907 00:52:01.768952   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.768985   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.769339   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.769827   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.769860   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.770129   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.770612   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.770641   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.782323   47297 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.782353   47297 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:52:01.782387   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.782822   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.782858   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.788535   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0907 00:52:01.789169   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.789826   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.789845   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.790158   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0907 00:52:01.790340   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.790544   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.790616   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.791036   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.791055   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.791552   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.791726   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.793270   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.796517   47297 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:52:01.794011   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.798239   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:52:01.798266   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:52:01.798291   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.800176   47297 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:51:59.928894   46354 crio.go:444] Took 1.740438 seconds to copy over tarball
	I0907 00:51:59.928974   46354 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:52:03.105945   46354 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.176929999s)
	I0907 00:52:03.105977   46354 crio.go:451] Took 3.177055 seconds to extract the tarball
	I0907 00:52:03.105987   46354 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:52:03.150092   46354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:52:03.193423   46354 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0907 00:52:03.193450   46354 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0907 00:52:03.193525   46354 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.193544   46354 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:03.193564   46354 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.193730   46354 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.193799   46354 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.193802   46354 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0907 00:52:03.193829   46354 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.193736   46354 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.194948   46354 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.195017   46354 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.194949   46354 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:03.195642   46354 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.195763   46354 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.195814   46354 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.195843   46354 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0907 00:52:03.195874   46354 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:01.801952   47297 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:52:01.801969   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:52:01.801989   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.800897   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I0907 00:52:01.801662   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.802261   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.802286   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.802332   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.802683   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.802922   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.802961   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.803124   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.804246   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.804272   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.804654   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.804870   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.805283   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.805314   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.805418   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.805448   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.805541   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.805723   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.805889   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.806052   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.822423   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32999
	I0907 00:52:01.822847   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.823441   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.823459   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.823843   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.824036   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.825740   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.826032   47297 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:52:01.826051   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:52:01.826076   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.829041   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.829284   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.829310   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.829407   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.829591   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.829712   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.830194   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.956646   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:52:01.956669   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:52:01.974183   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:52:01.978309   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:52:02.048672   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:52:02.048708   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:52:02.088069   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:52:02.088099   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:52:02.142271   47297 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-773466" to be "Ready" ...
	I0907 00:52:02.142668   47297 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0907 00:52:02.197788   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:52:03.587076   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.612851341s)
	I0907 00:52:03.587130   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587146   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587147   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.608805294s)
	I0907 00:52:03.587182   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587210   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587452   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.587493   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587514   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587525   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587535   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587495   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.587751   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587765   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587892   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587905   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587925   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587935   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.588252   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.588277   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.588285   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.588297   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.588305   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.588543   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.588555   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.648373   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.450538249s)
	I0907 00:52:03.648433   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.648449   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.648789   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.648824   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.648833   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.648848   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.648858   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.649118   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.649137   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.649153   47297 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-773466"
	I0907 00:52:03.834785   47297 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:52:00.858996   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:02.861983   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:01.228807   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:03.229017   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:04.154749   47297 node_ready.go:58] node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:04.260530   47297 addons.go:502] enable addons completed in 2.51536834s: enabled=[storage-provisioner default-storageclass metrics-server]
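With the metrics-server addon applied and verified above, its registration can also be checked from the host (a sketch; in this run the addon pod stays Pending because the test deliberately points it at the fake.domain echoserver image):

    kubectl --context default-k8s-diff-port-773466 -n kube-system get deploy metrics-server
    kubectl --context default-k8s-diff-port-773466 get apiservice v1beta1.metrics.k8s.io
    # only works once the APIService reports Available
    kubectl --context default-k8s-diff-port-773466 top nodes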
	I0907 00:52:03.398538   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.480702   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.482201   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.482206   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0907 00:52:03.482815   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.484155   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.484815   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.698892   46354 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0907 00:52:03.698936   46354 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.698938   46354 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0907 00:52:03.698965   46354 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0907 00:52:03.699028   46354 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.698975   46354 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0907 00:52:03.698982   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.699069   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.699084   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.703734   46354 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0907 00:52:03.703764   46354 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.703796   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729259   46354 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0907 00:52:03.729295   46354 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.729331   46354 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0907 00:52:03.729366   46354 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.729373   46354 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0907 00:52:03.729394   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.729398   46354 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.729404   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729336   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729441   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729491   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.729519   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0907 00:52:03.729601   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.791169   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0907 00:52:03.814632   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0907 00:52:03.814660   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.814689   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.814747   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0907 00:52:03.814799   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.814839   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0907 00:52:03.814841   46354 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0907 00:52:03.876039   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0907 00:52:03.876095   46354 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0907 00:52:03.876082   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0907 00:52:03.876114   46354 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0907 00:52:03.876153   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0907 00:52:03.876158   46354 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0907 00:52:04.549426   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:05.733437   46354 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.85724297s)
	I0907 00:52:05.733479   46354 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0907 00:52:05.733519   46354 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.184052604s)
	I0907 00:52:05.733568   46354 cache_images.go:92] LoadImages completed in 2.540103614s
	W0907 00:52:05.733639   46354 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0907 00:52:05.733723   46354 ssh_runner.go:195] Run: crio config
	I0907 00:52:05.795752   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:52:05.795780   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:05.795801   46354 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:52:05.795824   46354 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.245 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-940806 NodeName:old-k8s-version-940806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0907 00:52:05.795975   46354 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-940806"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-940806
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.245:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:52:05.796074   46354 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-940806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-940806 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:52:05.796135   46354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0907 00:52:05.807772   46354 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:52:05.807864   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:52:05.818185   46354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0907 00:52:05.835526   46354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:52:05.853219   46354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0907 00:52:05.873248   46354 ssh_runner.go:195] Run: grep 192.168.83.245	control-plane.minikube.internal$ /etc/hosts
	I0907 00:52:05.877640   46354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:52:05.890975   46354 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806 for IP: 192.168.83.245
	I0907 00:52:05.891009   46354 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:05.891171   46354 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:52:05.891226   46354 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:52:05.891327   46354 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.key
	I0907 00:52:05.891407   46354 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.key.8de8e89b
	I0907 00:52:05.891459   46354 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.key
	I0907 00:52:05.891667   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:52:05.891713   46354 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:52:05.891729   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:52:05.891766   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:52:05.891801   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:52:05.891836   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:52:05.891913   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:52:05.892547   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:52:05.917196   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:52:05.942387   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:52:05.965551   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:52:05.987658   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:52:06.012449   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:52:06.037055   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:52:06.061051   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:52:06.085002   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:52:06.109132   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:52:06.132091   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:52:06.155215   46354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:52:06.173122   46354 ssh_runner.go:195] Run: openssl version
	I0907 00:52:06.178736   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:52:06.189991   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.194548   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.194596   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.200538   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:52:06.212151   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:52:06.224356   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.229976   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.230037   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.236389   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:52:06.248369   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:52:06.259325   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.264451   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.264514   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.270564   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:52:06.282506   46354 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:52:06.287280   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:52:06.293280   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:52:06.299272   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:52:06.305342   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:52:06.311194   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:52:06.317634   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:52:06.323437   46354 kubeadm.go:404] StartCluster: {Name:old-k8s-version-940806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-940806 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:52:06.323591   46354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:52:06.323668   46354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:52:06.358285   46354 cri.go:89] found id: ""
	I0907 00:52:06.358357   46354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:52:06.368975   46354 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:52:06.368997   46354 kubeadm.go:636] restartCluster start
	I0907 00:52:06.369060   46354 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:52:06.379841   46354 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.380906   46354 kubeconfig.go:92] found "old-k8s-version-940806" server: "https://192.168.83.245:8443"
	I0907 00:52:06.383428   46354 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:52:06.393862   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.393912   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.406922   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.406947   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.406995   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.419930   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.920685   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.920763   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.934327   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:07.420551   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:07.420652   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:07.438377   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:07.920500   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:07.920598   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:07.936835   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:05.363807   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:07.869141   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:05.229666   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:07.729895   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:09.731464   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:06.656552   47297 node_ready.go:58] node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:09.155326   47297 node_ready.go:49] node "default-k8s-diff-port-773466" has status "Ready":"True"
	I0907 00:52:09.155347   47297 node_ready.go:38] duration metric: took 7.013040488s waiting for node "default-k8s-diff-port-773466" to be "Ready" ...
	I0907 00:52:09.155355   47297 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:09.164225   47297 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.170406   47297 pod_ready.go:92] pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.170437   47297 pod_ready.go:81] duration metric: took 6.189088ms waiting for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.170450   47297 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.178363   47297 pod_ready.go:92] pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.178390   47297 pod_ready.go:81] duration metric: took 7.932283ms waiting for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.178403   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.184875   47297 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.184891   47297 pod_ready.go:81] duration metric: took 6.482032ms waiting for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.184900   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.192246   47297 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.192265   47297 pod_ready.go:81] duration metric: took 7.359919ms waiting for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.192274   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.556032   47297 pod_ready.go:92] pod "kube-proxy-5bh7n" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.556064   47297 pod_ready.go:81] duration metric: took 363.783194ms waiting for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.556077   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:08.420749   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:08.420813   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:08.434111   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:08.920795   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:08.920891   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:08.934515   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:09.420076   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:09.420167   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:09.433668   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:09.920090   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:09.920185   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:09.934602   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.420086   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:10.420186   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:10.434617   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.920124   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:10.920196   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:10.933372   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:11.420990   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:11.421072   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:11.435087   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:11.920579   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:11.920653   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:11.933614   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:12.420100   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:12.420192   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:12.434919   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:12.920816   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:12.920911   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:12.934364   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.357508   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.357966   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:14.358965   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.227826   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:14.228106   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:11.862581   47297 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.363573   47297 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:12.363593   47297 pod_ready.go:81] duration metric: took 2.807509276s waiting for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:12.363602   47297 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:14.763624   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:13.420355   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:13.420427   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:13.434047   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:13.920675   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:13.920757   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:13.933725   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:14.420169   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:14.420244   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:14.433012   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:14.920490   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:14.920603   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:14.934208   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:15.420724   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:15.420807   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:15.433542   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:15.920040   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:15.920114   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:15.933104   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:16.394845   46354 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:52:16.394878   46354 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:52:16.394891   46354 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:52:16.394939   46354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:52:16.430965   46354 cri.go:89] found id: ""
	I0907 00:52:16.431029   46354 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:52:16.449241   46354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:52:16.459891   46354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:52:16.459973   46354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:52:16.470006   46354 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:52:16.470033   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:16.591111   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.262647   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.481491   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.601432   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.722907   46354 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:52:17.723000   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:17.735327   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:16.360886   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.860619   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:16.230019   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.230274   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:17.262772   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:19.264986   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.254002   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:18.753686   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:19.253956   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:19.290590   46354 api_server.go:72] duration metric: took 1.567681708s to wait for apiserver process to appear ...
	I0907 00:52:19.290614   46354 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:52:19.290632   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:19.291177   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": dial tcp 192.168.83.245:8443: connect: connection refused
	I0907 00:52:19.291217   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:19.291691   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": dial tcp 192.168.83.245:8443: connect: connection refused
	I0907 00:52:19.792323   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:21.357716   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:23.358355   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:20.728569   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:22.730042   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:21.763571   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:24.264990   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:24.793514   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0907 00:52:24.793568   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:24.939397   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:52:24.939429   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:52:25.292624   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:25.350968   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0907 00:52:25.351004   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0907 00:52:25.792573   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:25.799666   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0907 00:52:25.799697   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0907 00:52:26.292258   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:26.301200   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 200:
	ok
	I0907 00:52:26.313982   46354 api_server.go:141] control plane version: v1.16.0
	I0907 00:52:26.314007   46354 api_server.go:131] duration metric: took 7.023387143s to wait for apiserver health ...
	I0907 00:52:26.314016   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:52:26.314021   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:26.316011   46354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:52:26.317496   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:52:26.335726   46354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:52:26.373988   46354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:52:26.393836   46354 system_pods.go:59] 7 kube-system pods found
	I0907 00:52:26.393861   46354 system_pods.go:61] "coredns-5644d7b6d9-56l68" [ab956d84-2998-42a4-b9ed-b71bc43c9730] Running
	I0907 00:52:26.393866   46354 system_pods.go:61] "etcd-old-k8s-version-940806" [6234bc4e-66d0-4fb6-8631-b45ee56b774c] Running
	I0907 00:52:26.393870   46354 system_pods.go:61] "kube-apiserver-old-k8s-version-940806" [303d2368-1964-4bdb-9d46-91602d6c52b4] Running
	I0907 00:52:26.393875   46354 system_pods.go:61] "kube-controller-manager-old-k8s-version-940806" [7a193f1e-8650-453b-bfa5-d4af3a8bfbc3] Running
	I0907 00:52:26.393878   46354 system_pods.go:61] "kube-proxy-2d8pb" [1689f3e9-0487-422e-a450-9c96595cea00] Running
	I0907 00:52:26.393882   46354 system_pods.go:61] "kube-scheduler-old-k8s-version-940806" [cbd69cd2-3fc6-418b-aa4f-ef19b1b903e1] Running
	I0907 00:52:26.393886   46354 system_pods.go:61] "storage-provisioner" [f313e63f-6c39-4b81-86d1-8054fd6af338] Running
	I0907 00:52:26.393891   46354 system_pods.go:74] duration metric: took 19.879283ms to wait for pod list to return data ...
	I0907 00:52:26.393900   46354 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:52:26.401474   46354 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:52:26.401502   46354 node_conditions.go:123] node cpu capacity is 2
	I0907 00:52:26.401512   46354 node_conditions.go:105] duration metric: took 7.606706ms to run NodePressure ...
	I0907 00:52:26.401529   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:26.811645   46354 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:52:26.817493   46354 retry.go:31] will retry after 177.884133ms: kubelet not initialised
	I0907 00:52:26.999917   46354 retry.go:31] will retry after 499.371742ms: kubelet not initialised
	I0907 00:52:27.504386   46354 retry.go:31] will retry after 692.030349ms: kubelet not initialised
	I0907 00:52:28.201498   46354 retry.go:31] will retry after 627.806419ms: kubelet not initialised
	I0907 00:52:25.358575   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:27.860612   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:25.229134   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:27.230538   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:29.729637   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:26.764040   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:29.264855   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:28.841483   46354 retry.go:31] will retry after 1.816521725s: kubelet not initialised
	I0907 00:52:30.664615   46354 retry.go:31] will retry after 1.888537042s: kubelet not initialised
	I0907 00:52:32.559591   46354 retry.go:31] will retry after 1.787314239s: kubelet not initialised
	I0907 00:52:30.358330   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:32.857719   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:32.229103   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:34.229797   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:31.265047   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:33.763354   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:34.353206   46354 retry.go:31] will retry after 5.20863166s: kubelet not initialised
	I0907 00:52:34.860752   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:37.358005   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:36.229978   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:38.728934   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:36.264389   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:38.762232   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:39.567124   46354 retry.go:31] will retry after 8.04288108s: kubelet not initialised
	I0907 00:52:39.863004   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:42.359394   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:40.729770   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:43.236530   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:40.762994   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:43.263094   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:45.264328   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.616011   46354 retry.go:31] will retry after 4.959306281s: kubelet not initialised
	I0907 00:52:44.858665   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.359722   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:45.729067   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:48.228533   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.763985   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:50.263571   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.580975   46354 retry.go:31] will retry after 19.653399141s: kubelet not initialised
	I0907 00:52:49.858583   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.360050   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.361428   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:50.229168   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.229310   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.229581   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.263685   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.762390   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.857835   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.357322   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.728575   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.228623   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.762553   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.263070   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.357560   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.358151   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.228910   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.728870   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.264341   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.764046   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:05.858279   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:07.861484   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:05.729314   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:08.229765   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:06.263532   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:08.763318   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:12.241966   46354 kubeadm.go:787] kubelet initialised
	I0907 00:53:12.242006   46354 kubeadm.go:788] duration metric: took 45.430332167s waiting for restarted kubelet to initialise ...
	I0907 00:53:12.242016   46354 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:53:12.247545   46354 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.253242   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.253264   46354 pod_ready.go:81] duration metric: took 5.697075ms waiting for pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.253276   46354 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.258467   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.258489   46354 pod_ready.go:81] duration metric: took 5.206456ms waiting for pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.258497   46354 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.264371   46354 pod_ready.go:92] pod "etcd-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.264394   46354 pod_ready.go:81] duration metric: took 5.89143ms waiting for pod "etcd-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.264406   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.269447   46354 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.269467   46354 pod_ready.go:81] duration metric: took 5.053466ms waiting for pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.269481   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.638374   46354 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.638400   46354 pod_ready.go:81] duration metric: took 368.911592ms waiting for pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.638413   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2d8pb" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.039158   46354 pod_ready.go:92] pod "kube-proxy-2d8pb" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:13.039183   46354 pod_ready.go:81] duration metric: took 400.763103ms waiting for pod "kube-proxy-2d8pb" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.039191   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:10.359605   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:12.361679   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:10.729293   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.229130   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:11.263595   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.264729   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:15.268640   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.439450   46354 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:13.439477   46354 pod_ready.go:81] duration metric: took 400.279988ms waiting for pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.439486   46354 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:15.746303   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:17.747193   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:14.858056   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:16.860373   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:19.361777   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:15.730623   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:18.229790   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:17.763744   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.262360   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.246964   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:22.746507   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:21.361826   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:23.857891   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.729313   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:23.228479   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:22.263551   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:24.762509   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.246087   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:27.745946   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.858658   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:28.361105   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.732342   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:28.229971   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:26.763684   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:29.262971   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:29.746043   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:31.746133   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:30.857617   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:32.860863   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:30.728633   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:32.730094   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:31.264742   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:33.764483   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:33.748648   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:36.246158   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:35.358908   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:37.361998   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:35.229141   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:37.729367   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:36.263505   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:38.264633   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:38.746190   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.751934   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:39.858993   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:41.860052   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:44.359421   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.228491   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:42.229143   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:44.229996   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.766539   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:43.264325   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:43.245475   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:45.245574   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:47.246524   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:46.857876   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:48.859569   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:46.230037   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:48.727940   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:45.763110   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:47.763211   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.264727   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:49.745339   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:51.746054   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.859934   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:53.357432   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.729449   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:52.729731   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.731191   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:52.763145   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.763847   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.246469   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:56.746034   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:55.357937   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:57.856743   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:57.227742   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:59.228654   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:56.764030   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:58.765416   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:58.746909   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.246396   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:59.858583   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:02.357694   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:04.357907   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.229565   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.729229   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.263126   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.764100   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.745703   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:05.745994   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.858308   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:09.357561   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.229604   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.727738   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.262721   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.263088   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.264022   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.246673   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.246999   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.746105   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:11.358384   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:13.358491   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.729593   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.732429   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.762306   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.263152   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:14.746491   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.245728   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.361153   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.860338   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.229785   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.730926   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:19.733515   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.763593   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:20.264199   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:19.247271   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:21.251269   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:20.360652   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.860291   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.229545   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:24.729109   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.264956   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:24.764699   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:23.746737   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:25.747269   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:25.357166   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:27.358248   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:26.729136   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.226834   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:27.262945   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.763714   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:28.245784   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:30.245932   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.745051   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.860752   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.357600   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.361871   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:31.227731   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:33.727721   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.262586   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.263485   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.745803   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.745877   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.858000   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.859206   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:35.729469   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.227947   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.763348   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.763533   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:39.245567   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:41.246549   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:40.859969   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:42.862293   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:40.228842   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:42.230064   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:44.732421   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:41.263587   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:43.762536   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:43.746104   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:46.247106   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:45.358648   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:47.858022   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:47.229847   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:49.729764   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:45.763352   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:48.263554   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:48.745911   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.746370   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.357129   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.357416   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:54.359626   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.228487   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:54.728565   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.762919   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.764740   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:55.262939   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:53.248337   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:55.746300   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:56.858127   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.358102   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:56.730045   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.227094   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:57.263059   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.263696   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:58.247342   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:00.745494   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:02.748481   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.360153   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.360737   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.227937   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.235852   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.263956   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.763406   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.246551   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:07.747587   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.858981   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:07.861146   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.729711   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:08.228310   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.764163   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:08.263381   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.263936   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.247504   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.745798   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.360810   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.859446   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.229240   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.728782   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:14.729856   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.763565   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:15.263530   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:14.746534   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.246569   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:15.356953   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.358790   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:16.732983   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.228136   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.264573   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.763137   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.745008   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.745932   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.858109   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:22.358258   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.228589   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.729147   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.763406   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.763580   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.746337   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.748262   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:24.860943   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:27.357823   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.729423   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:27.731209   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.764235   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:28.263390   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:28.254786   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.746056   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:29.859827   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:31.861387   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:33.862627   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.227830   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:32.227911   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:34.728680   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.762895   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:32.763333   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:35.262940   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:33.247352   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:35.247638   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.747011   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:36.356562   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:38.358379   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.227942   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:39.230445   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.264134   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:39.763848   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:40.245726   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.246951   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:40.858763   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.859176   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:41.729215   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.228235   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.263784   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.762310   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.747834   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:46.748669   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:45.361972   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:47.861601   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:45.453504   46768 pod_ready.go:81] duration metric: took 4m0.000384981s waiting for pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace to be "Ready" ...
	E0907 00:55:45.453536   46768 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:55:45.453557   46768 pod_ready.go:38] duration metric: took 4m14.103603262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:55:45.453586   46768 kubeadm.go:640] restartCluster took 4m33.861797616s
	W0907 00:55:45.453681   46768 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0907 00:55:45.453721   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0907 00:55:46.762627   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:48.764174   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:49.247771   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:51.747171   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:50.361591   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:52.362641   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:53.550366   46833 pod_ready.go:81] duration metric: took 4m0.000125687s waiting for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	E0907 00:55:53.550409   46833 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:55:53.550421   46833 pod_ready.go:38] duration metric: took 4m5.601345022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:55:53.550444   46833 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:55:53.550477   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:55:53.550553   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:55:53.601802   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:53.601823   46833 cri.go:89] found id: ""
	I0907 00:55:53.601831   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:55:53.601892   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.606465   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:55:53.606555   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:55:53.643479   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:53.643509   46833 cri.go:89] found id: ""
	I0907 00:55:53.643516   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:55:53.643562   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.648049   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:55:53.648101   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:55:53.679620   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:53.679648   46833 cri.go:89] found id: ""
	I0907 00:55:53.679658   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:55:53.679706   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.684665   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:55:53.684721   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:55:53.725282   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:53.725302   46833 cri.go:89] found id: ""
	I0907 00:55:53.725309   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:55:53.725364   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.729555   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:55:53.729627   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:55:53.761846   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:53.761875   46833 cri.go:89] found id: ""
	I0907 00:55:53.761883   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:55:53.761930   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.766451   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:55:53.766523   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:55:53.800099   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:53.800118   46833 cri.go:89] found id: ""
	I0907 00:55:53.800124   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:55:53.800168   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.804614   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:55:53.804676   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:55:53.841198   46833 cri.go:89] found id: ""
	I0907 00:55:53.841219   46833 logs.go:284] 0 containers: []
	W0907 00:55:53.841225   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:55:53.841230   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:55:53.841288   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:55:53.883044   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:53.883071   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:53.883077   46833 cri.go:89] found id: ""
	I0907 00:55:53.883085   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:55:53.883133   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.887172   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.891540   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:55:53.891566   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:53.944734   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:55:53.944765   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:53.979803   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:55:53.979832   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:54.015131   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:55:54.015159   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:54.062445   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:55:54.062478   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:54.097313   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:55:54.097343   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:55:54.685400   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:55:54.685442   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:55:51.262853   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:53.764766   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:54.248875   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:56.746538   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:54.836523   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:55:54.836555   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:54.885972   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:55:54.886002   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:54.918966   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:55:54.919000   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:54.951966   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:55:54.951996   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:55:54.991382   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:55:54.991418   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:55:55.048526   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:55:55.048561   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:55:57.564574   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:55:57.579844   46833 api_server.go:72] duration metric: took 4m15.68090954s to wait for apiserver process to appear ...
	I0907 00:55:57.579867   46833 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:55:57.579899   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:55:57.579963   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:55:57.619205   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:57.619225   46833 cri.go:89] found id: ""
	I0907 00:55:57.619235   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:55:57.619287   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.623884   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:55:57.623962   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:55:57.653873   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:57.653899   46833 cri.go:89] found id: ""
	I0907 00:55:57.653907   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:55:57.653967   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.658155   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:55:57.658219   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:55:57.688169   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:57.688195   46833 cri.go:89] found id: ""
	I0907 00:55:57.688203   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:55:57.688256   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.692208   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:55:57.692274   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:55:57.722477   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:57.722498   46833 cri.go:89] found id: ""
	I0907 00:55:57.722505   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:55:57.722548   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.726875   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:55:57.726926   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:55:57.768681   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:57.768709   46833 cri.go:89] found id: ""
	I0907 00:55:57.768718   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:55:57.768768   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.773562   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:55:57.773654   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:55:57.806133   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:57.806158   46833 cri.go:89] found id: ""
	I0907 00:55:57.806166   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:55:57.806222   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.810401   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:55:57.810446   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:55:57.840346   46833 cri.go:89] found id: ""
	I0907 00:55:57.840371   46833 logs.go:284] 0 containers: []
	W0907 00:55:57.840379   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:55:57.840384   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:55:57.840435   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:55:57.869978   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:57.869998   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:57.870002   46833 cri.go:89] found id: ""
	I0907 00:55:57.870008   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:55:57.870052   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.874945   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.878942   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:55:57.878964   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:55:58.015009   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:55:58.015035   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:58.063331   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:55:58.063365   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:58.098316   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:55:58.098343   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:58.140312   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:55:58.140342   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:58.170471   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:55:58.170499   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:55:58.217775   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:55:58.217804   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:55:58.275681   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:55:58.275717   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:58.323629   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:55:58.323663   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:58.360608   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:55:58.360636   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:58.397158   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:55:58.397193   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:58.435395   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:55:58.435425   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:55:59.023632   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:55:59.023687   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
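
Note: the repeated "listing CRI containers" / "Gathering logs for ..." pairs above form one log-collection sweep: each control-plane component is located by name with crictl, and the last 400 lines of every matching container are tailed. A minimal Go sketch of that pattern is below; it assumes crictl is on PATH and that sudo can reach the CRI-O socket, and it illustrates the flow rather than reproducing minikube's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the given component, e.g. "kube-apiserver", using crictl as in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs prints the last n log lines of one container.
func tailLogs(id string, n int) error {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner",
	}
	for _, component := range components {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", component, len(ids), ids)
		for _, id := range ids {
			_ = tailLogs(id, 400) // same --tail 400 used in the sweep above
		}
	}
}
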
	I0907 00:55:55.767692   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:58.262808   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:00.263787   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:59.246042   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:01.746441   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:01.540667   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:56:01.548176   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 200:
	ok
	I0907 00:56:01.549418   46833 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:01.549443   46833 api_server.go:131] duration metric: took 3.969568684s to wait for apiserver health ...
	I0907 00:56:01.549451   46833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:01.549474   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:01.549546   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:01.579945   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:56:01.579975   46833 cri.go:89] found id: ""
	I0907 00:56:01.579985   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:56:01.580038   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.584609   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:01.584673   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:01.628626   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:56:01.628647   46833 cri.go:89] found id: ""
	I0907 00:56:01.628656   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:56:01.628711   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.633293   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:01.633362   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:01.663898   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:56:01.663923   46833 cri.go:89] found id: ""
	I0907 00:56:01.663932   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:56:01.663994   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.668130   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:01.668198   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:01.699021   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:56:01.699045   46833 cri.go:89] found id: ""
	I0907 00:56:01.699055   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:56:01.699107   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.703470   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:01.703536   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:01.740360   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:56:01.740387   46833 cri.go:89] found id: ""
	I0907 00:56:01.740396   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:56:01.740450   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.747366   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:01.747445   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:01.783175   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:56:01.783218   46833 cri.go:89] found id: ""
	I0907 00:56:01.783226   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:56:01.783267   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.787565   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:01.787628   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:01.822700   46833 cri.go:89] found id: ""
	I0907 00:56:01.822730   46833 logs.go:284] 0 containers: []
	W0907 00:56:01.822740   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:01.822747   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:01.822818   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:01.853909   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:56:01.853934   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:56:01.853938   46833 cri.go:89] found id: ""
	I0907 00:56:01.853945   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:56:01.853990   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.858209   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.862034   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:56:01.862053   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:56:01.902881   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:56:01.902915   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:56:01.937846   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:56:01.937882   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:56:01.993495   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:56:01.993526   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:56:02.029773   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:56:02.029810   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:02.076180   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:02.076210   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:02.133234   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:02.133268   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:02.278183   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:56:02.278209   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:56:02.325096   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:56:02.325125   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:56:02.362517   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:56:02.362542   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:56:02.393393   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:02.393430   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:02.950480   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:02.950521   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:02.967628   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:56:02.967658   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:56:05.533216   46833 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:05.533249   46833 system_pods.go:61] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running
	I0907 00:56:05.533257   46833 system_pods.go:61] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running
	I0907 00:56:05.533264   46833 system_pods.go:61] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running
	I0907 00:56:05.533271   46833 system_pods.go:61] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running
	I0907 00:56:05.533276   46833 system_pods.go:61] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running
	I0907 00:56:05.533283   46833 system_pods.go:61] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running
	I0907 00:56:05.533292   46833 system_pods.go:61] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:05.533305   46833 system_pods.go:61] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running
	I0907 00:56:05.533315   46833 system_pods.go:74] duration metric: took 3.983859289s to wait for pod list to return data ...
	I0907 00:56:05.533327   46833 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:05.536806   46833 default_sa.go:45] found service account: "default"
	I0907 00:56:05.536833   46833 default_sa.go:55] duration metric: took 3.496147ms for default service account to be created ...
	I0907 00:56:05.536842   46833 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:05.543284   46833 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:05.543310   46833 system_pods.go:89] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running
	I0907 00:56:05.543318   46833 system_pods.go:89] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running
	I0907 00:56:05.543325   46833 system_pods.go:89] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running
	I0907 00:56:05.543332   46833 system_pods.go:89] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running
	I0907 00:56:05.543337   46833 system_pods.go:89] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running
	I0907 00:56:05.543344   46833 system_pods.go:89] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running
	I0907 00:56:05.543355   46833 system_pods.go:89] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:05.543367   46833 system_pods.go:89] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running
	I0907 00:56:05.543377   46833 system_pods.go:126] duration metric: took 6.528914ms to wait for k8s-apps to be running ...
	I0907 00:56:05.543391   46833 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:05.543437   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:05.559581   46833 system_svc.go:56] duration metric: took 16.174514ms WaitForService to wait for kubelet.
	I0907 00:56:05.559613   46833 kubeadm.go:581] duration metric: took 4m23.660681176s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:05.559638   46833 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:05.564521   46833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:05.564552   46833 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:05.564566   46833 node_conditions.go:105] duration metric: took 4.922449ms to run NodePressure ...
	I0907 00:56:05.564579   46833 start.go:228] waiting for startup goroutines ...
	I0907 00:56:05.564589   46833 start.go:233] waiting for cluster config update ...
	I0907 00:56:05.564609   46833 start.go:242] writing updated cluster config ...
	I0907 00:56:05.564968   46833 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:05.618906   46833 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:05.620461   46833 out.go:177] * Done! kubectl is now configured to use "embed-certs-546209" cluster and "default" namespace by default
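
Note: before the cluster is declared ready, the run above gates on GET /healthz answering 200 "ok", then verifies kube-system pods, the default service account, and the kubelet service. The sketch below covers only the healthz poll; the URL is copied from the log, while the insecure TLS setting and the 2-minute timeout are illustrative assumptions (minikube itself authenticates with the cluster's client certificates).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver health endpoint until it returns 200.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skip certificate verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, string(body))
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	// Endpoint taken from the embed-certs run above.
	if err := waitForHealthz("https://192.168.50.242:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
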
	I0907 00:56:02.763702   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:05.264729   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:04.246390   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:06.246925   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:07.762598   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:09.764581   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:08.746379   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:11.246764   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:12.263747   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:12.364712   47297 pod_ready.go:81] duration metric: took 4m0.00109115s waiting for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	E0907 00:56:12.364763   47297 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:56:12.364776   47297 pod_ready.go:38] duration metric: took 4m3.209409487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
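
Note: the metrics-server pod above never reported Ready inside the 4m0s window, so the wait ends with "context deadline exceeded" and the run moves on to the apiserver checks. The sketch below reproduces that kind of readiness check from outside minikube by shelling out to kubectl; the kubeconfig path and the k8s-app=metrics-server label selector are assumptions for illustration, not values taken from this run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl",
		"--kubeconfig", "/var/lib/minikube/kubeconfig", // assumed path
		"wait", "--namespace", "kube-system",
		"--for=condition=ready", "pod",
		"--selector=k8s-app=metrics-server", // assumed label
		"--timeout=4m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// A timeout here corresponds to the "context deadline exceeded" above.
		fmt.Println("wait failed:", err)
	}
}
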
	I0907 00:56:12.364799   47297 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:56:12.364833   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:12.364891   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:12.416735   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:12.416760   47297 cri.go:89] found id: ""
	I0907 00:56:12.416767   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:12.416818   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.423778   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:12.423849   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:12.465058   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:12.465086   47297 cri.go:89] found id: ""
	I0907 00:56:12.465095   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:12.465152   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.471730   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:12.471793   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:12.508984   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:12.509005   47297 cri.go:89] found id: ""
	I0907 00:56:12.509017   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:12.509073   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.513689   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:12.513745   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:12.550233   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:12.550257   47297 cri.go:89] found id: ""
	I0907 00:56:12.550266   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:12.550325   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.556588   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:12.556665   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:12.598826   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:12.598853   47297 cri.go:89] found id: ""
	I0907 00:56:12.598862   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:12.598913   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.603710   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:12.603778   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:12.645139   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:12.645169   47297 cri.go:89] found id: ""
	I0907 00:56:12.645179   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:12.645236   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.650685   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:12.650755   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:12.686256   47297 cri.go:89] found id: ""
	I0907 00:56:12.686284   47297 logs.go:284] 0 containers: []
	W0907 00:56:12.686291   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:12.686297   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:12.686349   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:12.719614   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:12.719638   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:12.719645   47297 cri.go:89] found id: ""
	I0907 00:56:12.719655   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:12.719713   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.724842   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.728880   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:12.728899   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:12.771051   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:12.771081   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:12.812110   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:12.812140   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:12.847819   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:12.847845   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:13.436674   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:13.436711   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:13.454385   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:13.454425   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:13.617809   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:13.617838   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:13.652209   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:13.652239   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:13.683939   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:13.683977   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:13.730116   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:13.730151   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:13.763253   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:13.763278   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:13.804890   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:13.804918   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:13.861822   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:13.861856   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:17.242461   46768 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.788701806s)
	I0907 00:56:17.242546   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:17.259241   46768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:56:17.268943   46768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:56:17.278094   46768 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:56:17.278138   46768 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0907 00:56:17.342868   46768 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0907 00:56:17.342981   46768 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 00:56:17.519943   46768 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:56:17.520089   46768 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:56:17.520214   46768 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 00:56:17.714902   46768 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:56:13.247487   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:15.746162   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:17.748049   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:17.716739   46768 out.go:204]   - Generating certificates and keys ...
	I0907 00:56:17.716894   46768 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 00:56:17.717007   46768 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 00:56:17.717113   46768 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0907 00:56:17.717361   46768 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0907 00:56:17.717892   46768 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0907 00:56:17.718821   46768 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0907 00:56:17.719502   46768 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0907 00:56:17.719996   46768 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0907 00:56:17.720644   46768 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0907 00:56:17.721254   46768 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0907 00:56:17.721832   46768 kubeadm.go:322] [certs] Using the existing "sa" key
	I0907 00:56:17.721911   46768 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:56:17.959453   46768 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:56:18.029012   46768 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:56:18.146402   46768 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:56:18.309148   46768 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:56:18.309726   46768 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:56:18.312628   46768 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:56:18.315593   46768 out.go:204]   - Booting up control plane ...
	I0907 00:56:18.315744   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:56:18.315870   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:56:18.317157   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:56:18.336536   46768 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:56:18.336947   46768 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:56:18.337042   46768 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0907 00:56:18.472759   46768 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
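
Note: this kubeadm restart first finished `kubeadm reset` (31.8s above), then looked for stale /etc/kubernetes/*.conf files; all four were absent, so the stale-config cleanup was skipped and `kubeadm init` reused the existing certificates. The sketch below covers only that stale-config decision; the file paths are taken from the log and the rest is illustrative.

package main

import (
	"fmt"
	"os"
)

// Kubeconfig files kubeadm refuses to overwrite if they already exist.
var confs = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

func main() {
	stale := false
	for _, p := range confs {
		if _, err := os.Stat(p); err == nil {
			fmt.Println("stale config present:", p)
			stale = true
		}
	}
	if !stale {
		fmt.Println("no stale configs: skip cleanup and run kubeadm init directly")
	}
}
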
	I0907 00:56:16.415279   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:56:16.431021   47297 api_server.go:72] duration metric: took 4m14.6757965s to wait for apiserver process to appear ...
	I0907 00:56:16.431047   47297 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:56:16.431086   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:16.431144   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:16.474048   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:16.474075   47297 cri.go:89] found id: ""
	I0907 00:56:16.474085   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:16.474141   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.478873   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:16.478956   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:16.512799   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:16.512817   47297 cri.go:89] found id: ""
	I0907 00:56:16.512824   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:16.512880   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.518717   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:16.518812   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:16.553996   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:16.554016   47297 cri.go:89] found id: ""
	I0907 00:56:16.554023   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:16.554066   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.559358   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:16.559422   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:16.598717   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:16.598739   47297 cri.go:89] found id: ""
	I0907 00:56:16.598746   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:16.598821   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.603704   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:16.603766   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:16.646900   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:16.646928   47297 cri.go:89] found id: ""
	I0907 00:56:16.646937   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:16.646995   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.651216   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:16.651287   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:16.681334   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:16.681361   47297 cri.go:89] found id: ""
	I0907 00:56:16.681374   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:16.681429   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.685963   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:16.686028   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:16.720214   47297 cri.go:89] found id: ""
	I0907 00:56:16.720243   47297 logs.go:284] 0 containers: []
	W0907 00:56:16.720253   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:16.720259   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:16.720316   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:16.756411   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:16.756437   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:16.756444   47297 cri.go:89] found id: ""
	I0907 00:56:16.756452   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:16.756512   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.762211   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.767635   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:16.767659   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:16.784092   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:16.784122   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:16.936817   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:16.936845   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:16.979426   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:16.979455   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:17.009878   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:17.009912   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:17.048086   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:17.048113   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:17.103114   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:17.103156   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:17.139125   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:17.139163   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:17.181560   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:17.181588   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:17.224815   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:17.224841   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:17.299438   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:17.299474   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:17.355165   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:17.355197   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:17.403781   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:17.403809   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:20.491060   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:56:20.498573   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 200:
	ok
	I0907 00:56:20.501753   47297 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:20.501774   47297 api_server.go:131] duration metric: took 4.070720466s to wait for apiserver health ...
	I0907 00:56:20.501782   47297 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:20.501807   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:20.501856   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:20.545524   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:20.545550   47297 cri.go:89] found id: ""
	I0907 00:56:20.545560   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:20.545616   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.552051   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:20.552120   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:20.593019   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:20.593041   47297 cri.go:89] found id: ""
	I0907 00:56:20.593049   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:20.593104   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.598430   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:20.598500   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:20.639380   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:20.639407   47297 cri.go:89] found id: ""
	I0907 00:56:20.639417   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:20.639507   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.645270   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:20.645342   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:20.247030   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:22.247132   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:20.684338   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:20.684368   47297 cri.go:89] found id: ""
	I0907 00:56:20.684378   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:20.684438   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.689465   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:20.689528   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:20.727854   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:20.727879   47297 cri.go:89] found id: ""
	I0907 00:56:20.727887   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:20.727938   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.733320   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:20.733389   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:20.776584   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:20.776607   47297 cri.go:89] found id: ""
	I0907 00:56:20.776614   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:20.776659   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.781745   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:20.781822   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:20.817720   47297 cri.go:89] found id: ""
	I0907 00:56:20.817746   47297 logs.go:284] 0 containers: []
	W0907 00:56:20.817756   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:20.817763   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:20.817819   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:20.857693   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:20.857716   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:20.857723   47297 cri.go:89] found id: ""
	I0907 00:56:20.857732   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:20.857788   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.862242   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.866469   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:20.866489   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:20.907476   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:20.907514   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:20.946383   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:20.946418   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:20.983830   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:20.983858   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:21.572473   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:21.572524   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:21.626465   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:21.626496   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:21.692455   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:21.692491   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:21.712600   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:21.712632   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:21.855914   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:21.855948   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:21.909035   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:21.909068   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:21.961286   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:21.961317   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:22.002150   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:22.002177   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:22.035129   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:22.035156   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:24.592419   47297 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:24.592455   47297 system_pods.go:61] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running
	I0907 00:56:24.592460   47297 system_pods.go:61] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running
	I0907 00:56:24.592464   47297 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running
	I0907 00:56:24.592469   47297 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running
	I0907 00:56:24.592473   47297 system_pods.go:61] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running
	I0907 00:56:24.592477   47297 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running
	I0907 00:56:24.592483   47297 system_pods.go:61] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:24.592489   47297 system_pods.go:61] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running
	I0907 00:56:24.592494   47297 system_pods.go:74] duration metric: took 4.090707422s to wait for pod list to return data ...
	I0907 00:56:24.592501   47297 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:24.596106   47297 default_sa.go:45] found service account: "default"
	I0907 00:56:24.596127   47297 default_sa.go:55] duration metric: took 3.621408ms for default service account to be created ...
	I0907 00:56:24.596134   47297 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:24.601998   47297 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:24.602021   47297 system_pods.go:89] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running
	I0907 00:56:24.602026   47297 system_pods.go:89] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running
	I0907 00:56:24.602032   47297 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running
	I0907 00:56:24.602037   47297 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running
	I0907 00:56:24.602041   47297 system_pods.go:89] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running
	I0907 00:56:24.602046   47297 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running
	I0907 00:56:24.602054   47297 system_pods.go:89] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:24.602063   47297 system_pods.go:89] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running
	I0907 00:56:24.602069   47297 system_pods.go:126] duration metric: took 5.931212ms to wait for k8s-apps to be running ...
	I0907 00:56:24.602076   47297 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:24.602116   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:24.623704   47297 system_svc.go:56] duration metric: took 21.617229ms WaitForService to wait for kubelet.
	I0907 00:56:24.623734   47297 kubeadm.go:581] duration metric: took 4m22.868513281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:24.623754   47297 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:24.628408   47297 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:24.628435   47297 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:24.628444   47297 node_conditions.go:105] duration metric: took 4.686272ms to run NodePressure ...
	I0907 00:56:24.628454   47297 start.go:228] waiting for startup goroutines ...
	I0907 00:56:24.628460   47297 start.go:233] waiting for cluster config update ...
	I0907 00:56:24.628469   47297 start.go:242] writing updated cluster config ...
	I0907 00:56:24.628735   47297 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:24.683237   47297 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:24.686336   47297 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-773466" cluster and "default" namespace by default
	I0907 00:56:26.977381   46768 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503998 seconds
	I0907 00:56:26.977624   46768 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:56:27.000116   46768 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:56:27.541598   46768 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:56:27.541809   46768 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-321164 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0907 00:56:28.055045   46768 kubeadm.go:322] [bootstrap-token] Using token: 7x1950.9u417zcplp1q0xai
	I0907 00:56:24.247241   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:26.773163   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:28.056582   46768 out.go:204]   - Configuring RBAC rules ...
	I0907 00:56:28.056725   46768 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:56:28.065256   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0907 00:56:28.075804   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:56:28.081996   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 00:56:28.090825   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:56:28.097257   46768 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:56:28.114787   46768 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0907 00:56:28.337001   46768 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 00:56:28.476411   46768 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 00:56:28.479682   46768 kubeadm.go:322] 
	I0907 00:56:28.479784   46768 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 00:56:28.479799   46768 kubeadm.go:322] 
	I0907 00:56:28.479898   46768 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 00:56:28.479912   46768 kubeadm.go:322] 
	I0907 00:56:28.479943   46768 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 00:56:28.480046   46768 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:56:28.480143   46768 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:56:28.480163   46768 kubeadm.go:322] 
	I0907 00:56:28.480343   46768 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0907 00:56:28.480361   46768 kubeadm.go:322] 
	I0907 00:56:28.480431   46768 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0907 00:56:28.480450   46768 kubeadm.go:322] 
	I0907 00:56:28.480544   46768 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 00:56:28.480656   46768 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:56:28.480783   46768 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:56:28.480796   46768 kubeadm.go:322] 
	I0907 00:56:28.480924   46768 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0907 00:56:28.481024   46768 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 00:56:28.481034   46768 kubeadm.go:322] 
	I0907 00:56:28.481117   46768 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7x1950.9u417zcplp1q0xai \
	I0907 00:56:28.481203   46768 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 00:56:28.481223   46768 kubeadm.go:322] 	--control-plane 
	I0907 00:56:28.481226   46768 kubeadm.go:322] 
	I0907 00:56:28.481346   46768 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:56:28.481355   46768 kubeadm.go:322] 
	I0907 00:56:28.481453   46768 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7x1950.9u417zcplp1q0xai \
	I0907 00:56:28.481572   46768 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:56:28.482216   46768 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:56:28.482238   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:56:28.482248   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:56:28.484094   46768 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:56:28.485597   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:56:28.537400   46768 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:56:28.577654   46768 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:56:28.577734   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:28.577747   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=no-preload-321164 minikube.k8s.io/updated_at=2023_09_07T00_56_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:28.909178   46768 ops.go:34] apiserver oom_adj: -16
	I0907 00:56:28.920821   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.027812   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.627489   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:30.127554   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.246606   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:31.746291   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:30.627315   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:31.127759   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:31.627183   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:32.127488   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:32.627464   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.126850   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.626901   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:34.126917   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:34.626850   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:35.127788   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.747054   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:35.747536   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:35.627454   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:36.126916   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:36.626926   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:37.126845   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:37.627579   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:38.126885   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:38.627849   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:39.127371   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:39.627929   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.127775   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.627392   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.760535   46768 kubeadm.go:1081] duration metric: took 12.182860946s to wait for elevateKubeSystemPrivileges.
	I0907 00:56:40.760574   46768 kubeadm.go:406] StartCluster complete in 5m29.209699324s
	I0907 00:56:40.760594   46768 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:56:40.760690   46768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:56:40.762820   46768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:56:40.763132   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:56:40.763152   46768 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:56:40.763245   46768 addons.go:69] Setting storage-provisioner=true in profile "no-preload-321164"
	I0907 00:56:40.763251   46768 addons.go:69] Setting default-storageclass=true in profile "no-preload-321164"
	I0907 00:56:40.763263   46768 addons.go:231] Setting addon storage-provisioner=true in "no-preload-321164"
	W0907 00:56:40.763271   46768 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:56:40.763272   46768 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-321164"
	I0907 00:56:40.763314   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.763357   46768 config.go:182] Loaded profile config "no-preload-321164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:56:40.763404   46768 addons.go:69] Setting metrics-server=true in profile "no-preload-321164"
	I0907 00:56:40.763421   46768 addons.go:231] Setting addon metrics-server=true in "no-preload-321164"
	W0907 00:56:40.763428   46768 addons.go:240] addon metrics-server should already be in state true
	I0907 00:56:40.763464   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.763718   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763747   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.763772   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763793   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.763811   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763833   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.781727   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I0907 00:56:40.781738   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0907 00:56:40.781741   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0907 00:56:40.782188   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782279   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782332   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782702   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782724   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.782856   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782873   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.782879   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782894   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.783096   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783306   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783354   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783531   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.783686   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.783717   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.783905   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.783949   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.801244   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0907 00:56:40.801534   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36269
	I0907 00:56:40.801961   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.802064   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.802509   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.802529   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.802673   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.802689   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.802942   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.803153   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.803218   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.803365   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.804775   46768 addons.go:231] Setting addon default-storageclass=true in "no-preload-321164"
	W0907 00:56:40.804798   46768 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:56:40.804828   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.805191   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.805490   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.807809   46768 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:56:40.806890   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.809154   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.809188   46768 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:56:40.809199   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:56:40.809215   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.809249   46768 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:56:40.810543   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:56:40.810557   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:56:40.810570   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.809485   46768 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-321164" context rescaled to 1 replicas
	I0907 00:56:40.810637   46768 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:56:40.813528   46768 out.go:177] * Verifying Kubernetes components...
	I0907 00:56:38.246743   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:40.747015   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:40.814976   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:40.817948   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.818029   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.818080   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818100   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.818117   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818137   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818156   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.818175   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818212   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.818282   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.818348   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.818462   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.818472   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.818676   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.827224   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I0907 00:56:40.827578   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.828106   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.828122   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.828464   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.829012   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.829043   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.843423   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41287
	I0907 00:56:40.843768   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.844218   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.844236   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.844529   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.844735   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.846265   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.846489   46768 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:56:40.846506   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:56:40.846525   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.849325   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.849666   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.849704   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.849897   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.850103   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.850251   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.850397   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.965966   46768 node_ready.go:35] waiting up to 6m0s for node "no-preload-321164" to be "Ready" ...
	I0907 00:56:40.966030   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0907 00:56:40.997127   46768 node_ready.go:49] node "no-preload-321164" has status "Ready":"True"
	I0907 00:56:40.997149   46768 node_ready.go:38] duration metric: took 31.151467ms waiting for node "no-preload-321164" to be "Ready" ...
	I0907 00:56:40.997158   46768 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:41.010753   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:56:41.011536   46768 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:41.022410   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:56:41.022431   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:56:41.051599   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:56:41.119566   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:56:41.119594   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:56:41.228422   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:56:41.228443   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:56:41.321420   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:56:42.776406   46768 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.810334575s)
	I0907 00:56:42.776435   46768 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0907 00:56:43.385184   46768 pod_ready.go:102] pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:43.446190   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.435398332s)
	I0907 00:56:43.446240   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.446248   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.3946112s)
	I0907 00:56:43.446255   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.449355   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.449362   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.449377   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.449389   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.449406   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.449732   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.449771   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.449787   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.450189   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.450216   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.450653   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.450672   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.450682   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.450691   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.451532   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.451597   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.451619   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.451635   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.451648   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.451869   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.451885   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.451895   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.689511   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.368045812s)
	I0907 00:56:43.689565   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.689579   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.689952   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.689963   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.689974   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.689991   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.690001   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.690291   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.690307   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.690309   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.690322   46768 addons.go:467] Verifying addon metrics-server=true in "no-preload-321164"
	I0907 00:56:43.693105   46768 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:56:43.694562   46768 addons.go:502] enable addons completed in 2.931409197s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0907 00:56:45.310723   46768 pod_ready.go:92] pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.310742   46768 pod_ready.go:81] duration metric: took 4.299181671s waiting for pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.310753   46768 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.316350   46768 pod_ready.go:92] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.316373   46768 pod_ready.go:81] duration metric: took 5.614264ms waiting for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.316385   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.321183   46768 pod_ready.go:92] pod "kube-apiserver-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.321205   46768 pod_ready.go:81] duration metric: took 4.811919ms waiting for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.321216   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.326279   46768 pod_ready.go:92] pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.326297   46768 pod_ready.go:81] duration metric: took 5.0741ms waiting for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.326308   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-st6n8" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.332665   46768 pod_ready.go:92] pod "kube-proxy-st6n8" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.332687   46768 pod_ready.go:81] duration metric: took 6.372253ms waiting for pod "kube-proxy-st6n8" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.332697   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.708023   46768 pod_ready.go:92] pod "kube-scheduler-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.708044   46768 pod_ready.go:81] duration metric: took 375.339873ms waiting for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.708051   46768 pod_ready.go:38] duration metric: took 4.710884592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:45.708065   46768 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:56:45.708106   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:56:45.725929   46768 api_server.go:72] duration metric: took 4.915250734s to wait for apiserver process to appear ...
	I0907 00:56:45.725950   46768 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:56:45.725964   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:56:45.731998   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 200:
	ok
	I0907 00:56:45.733492   46768 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:45.733507   46768 api_server.go:131] duration metric: took 7.552661ms to wait for apiserver health ...
	I0907 00:56:45.733514   46768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:45.911337   46768 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:45.911374   46768 system_pods.go:61] "coredns-5dd5756b68-8tnp7" [1d896961-1b2c-48fd-b9dd-a40a95174fed] Running
	I0907 00:56:45.911383   46768 system_pods.go:61] "etcd-no-preload-321164" [84b8dd41-f676-48e0-b231-c27178cc0345] Running
	I0907 00:56:45.911389   46768 system_pods.go:61] "kube-apiserver-no-preload-321164" [a5a3cde8-128a-411d-9970-d3811ba22c5c] Running
	I0907 00:56:45.911397   46768 system_pods.go:61] "kube-controller-manager-no-preload-321164" [81614893-1ef1-4246-84ad-d4a2d9dedff8] Running
	I0907 00:56:45.911403   46768 system_pods.go:61] "kube-proxy-st6n8" [8f3aa3f2-223b-43de-b0e9-987958c50108] Running
	I0907 00:56:45.911410   46768 system_pods.go:61] "kube-scheduler-no-preload-321164" [7a45c187-7365-4144-ae68-ba42b1069afd] Running
	I0907 00:56:45.911421   46768 system_pods.go:61] "metrics-server-57f55c9bc5-vgngs" [9036423c-c4f7-4beb-92da-e106b8af306c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:45.911435   46768 system_pods.go:61] "storage-provisioner" [58bbe692-61d0-466d-b6bf-28af2faf4ec9] Running
	I0907 00:56:45.911443   46768 system_pods.go:74] duration metric: took 177.923008ms to wait for pod list to return data ...
	I0907 00:56:45.911455   46768 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:46.107121   46768 default_sa.go:45] found service account: "default"
	I0907 00:56:46.107149   46768 default_sa.go:55] duration metric: took 195.685496ms for default service account to be created ...
	I0907 00:56:46.107159   46768 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:46.314551   46768 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:46.314588   46768 system_pods.go:89] "coredns-5dd5756b68-8tnp7" [1d896961-1b2c-48fd-b9dd-a40a95174fed] Running
	I0907 00:56:46.314596   46768 system_pods.go:89] "etcd-no-preload-321164" [84b8dd41-f676-48e0-b231-c27178cc0345] Running
	I0907 00:56:46.314603   46768 system_pods.go:89] "kube-apiserver-no-preload-321164" [a5a3cde8-128a-411d-9970-d3811ba22c5c] Running
	I0907 00:56:46.314611   46768 system_pods.go:89] "kube-controller-manager-no-preload-321164" [81614893-1ef1-4246-84ad-d4a2d9dedff8] Running
	I0907 00:56:46.314618   46768 system_pods.go:89] "kube-proxy-st6n8" [8f3aa3f2-223b-43de-b0e9-987958c50108] Running
	I0907 00:56:46.314624   46768 system_pods.go:89] "kube-scheduler-no-preload-321164" [7a45c187-7365-4144-ae68-ba42b1069afd] Running
	I0907 00:56:46.314634   46768 system_pods.go:89] "metrics-server-57f55c9bc5-vgngs" [9036423c-c4f7-4beb-92da-e106b8af306c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:46.314645   46768 system_pods.go:89] "storage-provisioner" [58bbe692-61d0-466d-b6bf-28af2faf4ec9] Running
	I0907 00:56:46.314653   46768 system_pods.go:126] duration metric: took 207.48874ms to wait for k8s-apps to be running ...
	I0907 00:56:46.314663   46768 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:46.314713   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:46.331286   46768 system_svc.go:56] duration metric: took 16.613382ms WaitForService to wait for kubelet.
	I0907 00:56:46.331316   46768 kubeadm.go:581] duration metric: took 5.520640777s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:46.331342   46768 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:46.507374   46768 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:46.507398   46768 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:46.507406   46768 node_conditions.go:105] duration metric: took 176.059527ms to run NodePressure ...
	I0907 00:56:46.507417   46768 start.go:228] waiting for startup goroutines ...
	I0907 00:56:46.507422   46768 start.go:233] waiting for cluster config update ...
	I0907 00:56:46.507433   46768 start.go:242] writing updated cluster config ...
	I0907 00:56:46.507728   46768 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:46.559712   46768 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:46.561693   46768 out.go:177] * Done! kubectl is now configured to use "no-preload-321164" cluster and "default" namespace by default
	I0907 00:56:43.245531   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:45.746168   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:48.247228   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:50.746605   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:52.748264   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:55.246186   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:57.746658   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:00.245358   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:02.246373   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:04.746154   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:07.245583   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:09.246215   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:11.247141   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:13.247249   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:13.440321   46354 pod_ready.go:81] duration metric: took 4m0.000811237s waiting for pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace to be "Ready" ...
	E0907 00:57:13.440352   46354 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:57:13.440368   46354 pod_ready.go:38] duration metric: took 4m1.198343499s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:57:13.440395   46354 kubeadm.go:640] restartCluster took 5m7.071390852s
	W0907 00:57:13.440463   46354 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0907 00:57:13.440538   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0907 00:57:26.505313   46354 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.064737983s)
	I0907 00:57:26.505392   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:57:26.521194   46354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:57:26.530743   46354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:57:26.540431   46354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:57:26.540473   46354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0907 00:57:26.744360   46354 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:57:39.131760   46354 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0907 00:57:39.131857   46354 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 00:57:39.131964   46354 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:57:39.132110   46354 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:57:39.132226   46354 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 00:57:39.132360   46354 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:57:39.132501   46354 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:57:39.132573   46354 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0907 00:57:39.132654   46354 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:57:39.134121   46354 out.go:204]   - Generating certificates and keys ...
	I0907 00:57:39.134212   46354 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 00:57:39.134313   46354 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 00:57:39.134422   46354 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0907 00:57:39.134501   46354 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0907 00:57:39.134605   46354 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0907 00:57:39.134688   46354 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0907 00:57:39.134801   46354 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0907 00:57:39.134902   46354 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0907 00:57:39.135010   46354 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0907 00:57:39.135121   46354 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0907 00:57:39.135169   46354 kubeadm.go:322] [certs] Using the existing "sa" key
	I0907 00:57:39.135241   46354 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:57:39.135308   46354 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:57:39.135393   46354 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:57:39.135512   46354 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:57:39.135599   46354 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:57:39.135700   46354 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:57:39.137273   46354 out.go:204]   - Booting up control plane ...
	I0907 00:57:39.137369   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:57:39.137458   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:57:39.137561   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:57:39.137677   46354 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:57:39.137888   46354 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 00:57:39.138013   46354 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503675 seconds
	I0907 00:57:39.138137   46354 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:57:39.138249   46354 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:57:39.138297   46354 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:57:39.138402   46354 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-940806 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0907 00:57:39.138453   46354 kubeadm.go:322] [bootstrap-token] Using token: nfcsq1.o4ef3s2bthacz2l0
	I0907 00:57:39.139754   46354 out.go:204]   - Configuring RBAC rules ...
	I0907 00:57:39.139848   46354 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:57:39.139970   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:57:39.140112   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 00:57:39.140245   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:57:39.140327   46354 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:57:39.140393   46354 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 00:57:39.140442   46354 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 00:57:39.140452   46354 kubeadm.go:322] 
	I0907 00:57:39.140525   46354 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 00:57:39.140533   46354 kubeadm.go:322] 
	I0907 00:57:39.140628   46354 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 00:57:39.140635   46354 kubeadm.go:322] 
	I0907 00:57:39.140665   46354 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 00:57:39.140752   46354 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:57:39.140822   46354 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:57:39.140834   46354 kubeadm.go:322] 
	I0907 00:57:39.140896   46354 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 00:57:39.140960   46354 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:57:39.141043   46354 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:57:39.141051   46354 kubeadm.go:322] 
	I0907 00:57:39.141159   46354 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0907 00:57:39.141262   46354 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 00:57:39.141276   46354 kubeadm.go:322] 
	I0907 00:57:39.141407   46354 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nfcsq1.o4ef3s2bthacz2l0 \
	I0907 00:57:39.141536   46354 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 00:57:39.141568   46354 kubeadm.go:322]     --control-plane 	  
	I0907 00:57:39.141575   46354 kubeadm.go:322] 
	I0907 00:57:39.141657   46354 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:57:39.141665   46354 kubeadm.go:322] 
	I0907 00:57:39.141730   46354 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nfcsq1.o4ef3s2bthacz2l0 \
	I0907 00:57:39.141832   46354 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:57:39.141851   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:57:39.141863   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:57:39.143462   46354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:57:39.144982   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:57:39.158663   46354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:57:39.180662   46354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:57:39.180747   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.180749   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=old-k8s-version-940806 minikube.k8s.io/updated_at=2023_09_07T00_57_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.208969   46354 ops.go:34] apiserver oom_adj: -16
	I0907 00:57:39.426346   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.545090   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:40.162127   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:40.662172   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:41.162069   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:41.662164   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:42.162355   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:42.662152   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:43.161862   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:43.661532   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:44.162130   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:44.661948   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:45.162260   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:45.662082   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:46.162345   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:46.662378   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:47.162307   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:47.662556   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:48.162204   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:48.661938   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:49.161608   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:49.662198   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:50.162016   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:50.662392   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:51.162303   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:51.662393   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:52.162510   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:52.662195   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:53.162302   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:53.662427   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.162085   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.662218   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.779895   46354 kubeadm.go:1081] duration metric: took 15.599222217s to wait for elevateKubeSystemPrivileges.
	I0907 00:57:54.779927   46354 kubeadm.go:406] StartCluster complete in 5m48.456500898s
	I0907 00:57:54.779949   46354 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:57:54.780038   46354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:57:54.782334   46354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:57:54.782624   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:57:54.782772   46354 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:57:54.782871   46354 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-940806"
	I0907 00:57:54.782890   46354 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-940806"
	I0907 00:57:54.782900   46354 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-940806"
	W0907 00:57:54.782908   46354 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:57:54.782918   46354 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-940806"
	W0907 00:57:54.782926   46354 addons.go:240] addon metrics-server should already be in state true
	I0907 00:57:54.782880   46354 config.go:182] Loaded profile config "old-k8s-version-940806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:57:54.782889   46354 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-940806"
	I0907 00:57:54.783049   46354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-940806"
	I0907 00:57:54.782963   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.782963   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.783499   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783500   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783528   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.783533   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.783571   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783599   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.802026   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44005
	I0907 00:57:54.802487   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.803108   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.803131   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.803164   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
	I0907 00:57:54.803164   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I0907 00:57:54.803512   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.803674   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.803710   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.804184   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.804215   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.804239   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.804259   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.804311   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.804327   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.804569   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.804668   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.804832   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.805067   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.805094   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.821660   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39335
	I0907 00:57:54.822183   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.822694   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.822720   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.823047   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.823247   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.823707   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I0907 00:57:54.824135   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.825021   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.825046   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.825082   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.827174   46354 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:57:54.825428   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.828768   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:57:54.828787   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:57:54.828808   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.829357   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.831479   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.833553   46354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:57:54.832288   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.832776   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.834996   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.835038   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.835055   46354 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:57:54.835067   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:57:54.835083   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.835140   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.835307   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.835410   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.836403   46354 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-940806"
	W0907 00:57:54.836424   46354 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:57:54.836451   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.836822   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.836851   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.838476   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.838920   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.838951   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.839218   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.839540   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.839719   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.839896   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.854883   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0907 00:57:54.855311   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.855830   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.855858   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.856244   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.856713   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.856737   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.872940   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39937
	I0907 00:57:54.873442   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.874030   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.874057   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.874433   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.874665   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.876568   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.876928   46354 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:57:54.876947   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:57:54.876966   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.879761   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.879993   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.880015   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.880248   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.880424   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.880591   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.880694   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.933915   46354 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-940806" context rescaled to 1 replicas
	I0907 00:57:54.933965   46354 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:57:54.936214   46354 out.go:177] * Verifying Kubernetes components...
	I0907 00:57:54.937844   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:57:55.011087   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:57:55.011114   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:57:55.020666   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:57:55.038411   46354 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-940806" to be "Ready" ...
	I0907 00:57:55.038474   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0907 00:57:55.066358   46354 node_ready.go:49] node "old-k8s-version-940806" has status "Ready":"True"
	I0907 00:57:55.066382   46354 node_ready.go:38] duration metric: took 27.94281ms waiting for node "old-k8s-version-940806" to be "Ready" ...
	I0907 00:57:55.066393   46354 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:57:55.076936   46354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace to be "Ready" ...
	I0907 00:57:55.118806   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:57:55.118835   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:57:55.145653   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:57:55.158613   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:57:55.158636   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:57:55.214719   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:57:56.905329   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.884630053s)
	I0907 00:57:56.905379   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905377   46354 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.866875113s)
	I0907 00:57:56.905392   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905403   46354 start.go:901] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I0907 00:57:56.905417   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.759735751s)
	I0907 00:57:56.905441   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905455   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905794   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.905842   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.905858   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.905878   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.905895   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905910   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905963   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906013   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906037   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.906047   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.906286   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906340   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906293   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906325   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906436   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906449   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.906459   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.906630   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906729   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906732   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906749   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.087889   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.873113752s)
	I0907 00:57:57.087946   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:57.087979   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:57.088366   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:57.089849   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:57.089880   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.089892   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:57.089899   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:57.090126   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:57.090146   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.090155   46354 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-940806"
	I0907 00:57:57.093060   46354 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:57:57.094326   46354 addons.go:502] enable addons completed in 2.311555161s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0907 00:57:57.115594   46354 pod_ready.go:102] pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:59.609005   46354 pod_ready.go:102] pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace has status "Ready":"False"
	I0907 00:58:00.605260   46354 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rf6lv" not found
	I0907 00:58:00.605285   46354 pod_ready.go:81] duration metric: took 5.528319392s waiting for pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace to be "Ready" ...
	E0907 00:58:00.605296   46354 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rf6lv" not found
	I0907 00:58:00.605305   46354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.623994   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace has status "Ready":"True"
	I0907 00:58:02.624020   46354 pod_ready.go:81] duration metric: took 2.01870868s waiting for pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.624039   46354 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bt454" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.629264   46354 pod_ready.go:92] pod "kube-proxy-bt454" in "kube-system" namespace has status "Ready":"True"
	I0907 00:58:02.629282   46354 pod_ready.go:81] duration metric: took 5.236562ms waiting for pod "kube-proxy-bt454" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.629288   46354 pod_ready.go:38] duration metric: took 7.562884581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:58:02.629301   46354 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:58:02.629339   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:58:02.644494   46354 api_server.go:72] duration metric: took 7.710498225s to wait for apiserver process to appear ...
	I0907 00:58:02.644515   46354 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:58:02.644529   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:58:02.651352   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 200:
	ok
	I0907 00:58:02.652147   46354 api_server.go:141] control plane version: v1.16.0
	I0907 00:58:02.652186   46354 api_server.go:131] duration metric: took 7.646808ms to wait for apiserver health ...
	I0907 00:58:02.652199   46354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:58:02.656482   46354 system_pods.go:59] 4 kube-system pods found
	I0907 00:58:02.656506   46354 system_pods.go:61] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.656513   46354 system_pods.go:61] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.656524   46354 system_pods.go:61] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.656534   46354 system_pods.go:61] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.656541   46354 system_pods.go:74] duration metric: took 4.333279ms to wait for pod list to return data ...
	I0907 00:58:02.656553   46354 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:58:02.659079   46354 default_sa.go:45] found service account: "default"
	I0907 00:58:02.659102   46354 default_sa.go:55] duration metric: took 2.543265ms for default service account to be created ...
	I0907 00:58:02.659110   46354 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:58:02.663028   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:02.663050   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.663058   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.663069   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.663077   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.663094   46354 retry.go:31] will retry after 205.506153ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:02.874261   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:02.874291   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.874299   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.874309   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.874318   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.874335   46354 retry.go:31] will retry after 265.617543ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:03.145704   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:03.145736   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:03.145745   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:03.145755   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:03.145764   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:03.145782   46354 retry.go:31] will retry after 459.115577ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:03.610425   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:03.610458   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:03.610466   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:03.610474   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:03.610482   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:03.610498   46354 retry.go:31] will retry after 411.97961ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:04.026961   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:04.026992   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:04.026997   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:04.027004   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:04.027011   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:04.027024   46354 retry.go:31] will retry after 633.680519ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:04.665840   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:04.665868   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:04.665877   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:04.665889   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:04.665899   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:04.665916   46354 retry.go:31] will retry after 680.962565ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:05.352621   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:05.352644   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:05.352652   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:05.352699   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:05.352710   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:05.352725   46354 retry.go:31] will retry after 939.996523ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:06.298740   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:06.298765   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:06.298770   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:06.298791   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:06.298803   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:06.298820   46354 retry.go:31] will retry after 1.103299964s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:07.407728   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:07.407753   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:07.407758   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:07.407766   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:07.407772   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:07.407785   46354 retry.go:31] will retry after 1.13694803s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:08.550198   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:08.550228   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:08.550236   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:08.550245   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:08.550252   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:08.550269   46354 retry.go:31] will retry after 2.240430665s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:10.796203   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:10.796228   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:10.796233   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:10.796240   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:10.796246   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:10.796261   46354 retry.go:31] will retry after 2.183105097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:12.985467   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:12.985491   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:12.985500   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:12.985510   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:12.985518   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:12.985535   46354 retry.go:31] will retry after 2.428546683s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:15.419138   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:15.419163   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:15.419168   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:15.419174   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:15.419181   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:15.419195   46354 retry.go:31] will retry after 2.778392129s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:18.202590   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:18.202621   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:18.202629   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:18.202639   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:18.202648   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:18.202670   46354 retry.go:31] will retry after 5.204092587s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:23.412120   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:23.412144   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:23.412157   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:23.412164   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:23.412171   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:23.412187   46354 retry.go:31] will retry after 6.095121382s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:29.513424   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:29.513449   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:29.513454   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:29.513462   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:29.513468   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:29.513482   46354 retry.go:31] will retry after 6.142679131s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:35.662341   46354 system_pods.go:86] 5 kube-system pods found
	I0907 00:58:35.662367   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:35.662372   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:35.662377   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Pending
	I0907 00:58:35.662383   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:35.662390   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:35.662408   46354 retry.go:31] will retry after 10.800349656s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:46.468817   46354 system_pods.go:86] 6 kube-system pods found
	I0907 00:58:46.468845   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:46.468854   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:58:46.468859   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:46.468867   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:58:46.468876   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:46.468884   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:46.468901   46354 retry.go:31] will retry after 10.570531489s: missing components: kube-apiserver, kube-controller-manager
	I0907 00:58:57.047784   46354 system_pods.go:86] 8 kube-system pods found
	I0907 00:58:57.047865   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:57.047892   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:58:57.048256   46354 system_pods.go:89] "kube-apiserver-old-k8s-version-940806" [6a513b1a-cad2-4136-a7b0-a86df04f6c09] Pending
	I0907 00:58:57.048272   46354 system_pods.go:89] "kube-controller-manager-old-k8s-version-940806" [5ff6ffdb-1b2c-4498-84ad-e2811a8dd16a] Pending
	I0907 00:58:57.048279   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:57.048286   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:58:57.048301   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:57.048315   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:57.048345   46354 retry.go:31] will retry after 14.06926028s: missing components: kube-apiserver, kube-controller-manager
	I0907 00:59:11.124216   46354 system_pods.go:86] 8 kube-system pods found
	I0907 00:59:11.124242   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:59:11.124248   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:59:11.124252   46354 system_pods.go:89] "kube-apiserver-old-k8s-version-940806" [6a513b1a-cad2-4136-a7b0-a86df04f6c09] Running
	I0907 00:59:11.124257   46354 system_pods.go:89] "kube-controller-manager-old-k8s-version-940806" [5ff6ffdb-1b2c-4498-84ad-e2811a8dd16a] Running
	I0907 00:59:11.124261   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:59:11.124265   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:59:11.124272   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:59:11.124276   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:59:11.124283   46354 system_pods.go:126] duration metric: took 1m8.465167722s to wait for k8s-apps to be running ...
	I0907 00:59:11.124289   46354 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:59:11.124328   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:59:11.140651   46354 system_svc.go:56] duration metric: took 16.348641ms WaitForService to wait for kubelet.
	I0907 00:59:11.140686   46354 kubeadm.go:581] duration metric: took 1m16.206690472s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:59:11.140714   46354 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:59:11.144185   46354 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:59:11.144212   46354 node_conditions.go:123] node cpu capacity is 2
	I0907 00:59:11.144224   46354 node_conditions.go:105] duration metric: took 3.50462ms to run NodePressure ...
	I0907 00:59:11.144235   46354 start.go:228] waiting for startup goroutines ...
	I0907 00:59:11.144244   46354 start.go:233] waiting for cluster config update ...
	I0907 00:59:11.144259   46354 start.go:242] writing updated cluster config ...
	I0907 00:59:11.144547   46354 ssh_runner.go:195] Run: rm -f paused
	I0907 00:59:11.194224   46354 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0907 00:59:11.196420   46354 out.go:177] 
	W0907 00:59:11.197939   46354 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0907 00:59:11.199287   46354 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0907 00:59:11.200770   46354 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-940806" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-07 00:50:42 UTC, ends at Thu 2023-09-07 01:05:48 UTC. --
	Sep 07 01:05:47 no-preload-321164 crio[712]: time="2023-09-07 01:05:47.671079573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e1a4e4b9-c5a4-4de8-be03-909de45d62a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.008406753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f2846b60-8370-4034-a1a7-59c3f26b849c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.008499024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f2846b60-8370-4034-a1a7-59c3f26b849c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.008801746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f2846b60-8370-4034-a1a7-59c3f26b849c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.057456932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=39ddf597-97d2-4639-999c-a8b87fd9583f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.057609436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=39ddf597-97d2-4639-999c-a8b87fd9583f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.057785920Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=39ddf597-97d2-4639-999c-a8b87fd9583f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.102919948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=65f0ca8a-f82f-4c6d-a1aa-fea1a08c61ae name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.103011023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=65f0ca8a-f82f-4c6d-a1aa-fea1a08c61ae name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.103181932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=65f0ca8a-f82f-4c6d-a1aa-fea1a08c61ae name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.137932337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=963474cf-d663-4dcb-b38f-7ed1f2ad1a1e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.138041778Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=963474cf-d663-4dcb-b38f-7ed1f2ad1a1e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.138271070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=963474cf-d663-4dcb-b38f-7ed1f2ad1a1e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.173916482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cca78469-1dc0-4bc2-8ba9-d55ef5538f97 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.174032098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cca78469-1dc0-4bc2-8ba9-d55ef5538f97 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.174202756Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cca78469-1dc0-4bc2-8ba9-d55ef5538f97 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.220765199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5c52d8f3-b5f5-44b2-8463-b0f980193e1d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.220857247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5c52d8f3-b5f5-44b2-8463-b0f980193e1d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.221026533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5c52d8f3-b5f5-44b2-8463-b0f980193e1d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.261891619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ca231552-38ef-4193-8a74-91ca0d7d6f05 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.262003787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ca231552-38ef-4193-8a74-91ca0d7d6f05 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.262190571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ca231552-38ef-4193-8a74-91ca0d7d6f05 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.295011359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0080e749-2069-484e-bfa4-dd9f91ffdd40 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.295097219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0080e749-2069-484e-bfa4-dd9f91ffdd40 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:05:48 no-preload-321164 crio[712]: time="2023-09-07 01:05:48.295266381Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0080e749-2069-484e-bfa4-dd9f91ffdd40 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	f22e91d0ce8dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   fb40ca822771b
	8e0183b73848b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   1793a0b969a05
	51811962596db       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   9 minutes ago       Running             kube-proxy                0                   d7a01515c0f42
	0ea3466fd42e9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   20b9f6105004e
	4404f1dd3fac9       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   9 minutes ago       Running             kube-scheduler            2                   31c952c775699
	731ac2001421f       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   9 minutes ago       Running             kube-controller-manager   2                   cb5064eb26ab5
	785b52b71f61b       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   9 minutes ago       Running             kube-apiserver            2                   a886ce3866e94
	
	* 
	* ==> coredns [8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:44258 - 22988 "HINFO IN 890977412813668942.3486118729297504883. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009767887s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-321164
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-321164
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=no-preload-321164
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_07T00_56_28_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:56:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-321164
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Sep 2023 01:05:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 01:01:55 +0000   Thu, 07 Sep 2023 00:56:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 01:01:55 +0000   Thu, 07 Sep 2023 00:56:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 01:01:55 +0000   Thu, 07 Sep 2023 00:56:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 01:01:55 +0000   Thu, 07 Sep 2023 00:56:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.125
	  Hostname:    no-preload-321164
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 509d986e8a774ffdb920ce8b89b0ab68
	  System UUID:                509d986e-8a77-4ffd-b920-ce8b89b0ab68
	  Boot ID:                    a61452df-4bbd-4620-855f-33e6e4674737
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-8tnp7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-321164                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-no-preload-321164             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-no-preload-321164    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-st6n8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-no-preload-321164             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 metrics-server-57f55c9bc5-vgngs              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m30s (x8 over 9m30s)  kubelet          Node no-preload-321164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m30s (x8 over 9m30s)  kubelet          Node no-preload-321164 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m30s (x7 over 9m30s)  kubelet          Node no-preload-321164 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node no-preload-321164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node no-preload-321164 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node no-preload-321164 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m20s                  kubelet          Node no-preload-321164 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m10s                  kubelet          Node no-preload-321164 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node no-preload-321164 event: Registered Node no-preload-321164 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep 7 00:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071223] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.281912] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.350391] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139238] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.395196] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.092536] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.108494] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.142096] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.118733] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.227690] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Sep 7 00:51] systemd-fstab-generator[1219]: Ignoring "noauto" for root device
	[ +10.942270] hrtimer: interrupt took 3413367 ns
	[  +8.689484] kauditd_printk_skb: 29 callbacks suppressed
	[Sep 7 00:56] systemd-fstab-generator[3841]: Ignoring "noauto" for root device
	[  +9.763798] systemd-fstab-generator[4174]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9] <==
	* {"level":"info","ts":"2023-09-07T00:56:22.459261Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.125:2380"}
	{"level":"info","ts":"2023-09-07T00:56:22.459304Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.125:2380"}
	{"level":"info","ts":"2023-09-07T00:56:22.460114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f95bb4e8498c60d4 switched to configuration voters=(17968154048084074708)"}
	{"level":"info","ts":"2023-09-07T00:56:22.460282Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1fb33b3a6db0430d","local-member-id":"f95bb4e8498c60d4","added-peer-id":"f95bb4e8498c60d4","added-peer-peer-urls":["https://192.168.61.125:2380"]}
	{"level":"info","ts":"2023-09-07T00:56:22.458879Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-07T00:56:22.461213Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-07T00:56:22.461899Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-07T00:56:23.208684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f95bb4e8498c60d4 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-07T00:56:23.208803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f95bb4e8498c60d4 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-07T00:56:23.208888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f95bb4e8498c60d4 received MsgPreVoteResp from f95bb4e8498c60d4 at term 1"}
	{"level":"info","ts":"2023-09-07T00:56:23.208949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f95bb4e8498c60d4 became candidate at term 2"}
	{"level":"info","ts":"2023-09-07T00:56:23.208994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f95bb4e8498c60d4 received MsgVoteResp from f95bb4e8498c60d4 at term 2"}
	{"level":"info","ts":"2023-09-07T00:56:23.209026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f95bb4e8498c60d4 became leader at term 2"}
	{"level":"info","ts":"2023-09-07T00:56:23.209052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f95bb4e8498c60d4 elected leader f95bb4e8498c60d4 at term 2"}
	{"level":"info","ts":"2023-09-07T00:56:23.211926Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f95bb4e8498c60d4","local-member-attributes":"{Name:no-preload-321164 ClientURLs:[https://192.168.61.125:2379]}","request-path":"/0/members/f95bb4e8498c60d4/attributes","cluster-id":"1fb33b3a6db0430d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-07T00:56:23.212169Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:56:23.212731Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:56:23.214048Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.125:2379"}
	{"level":"info","ts":"2023-09-07T00:56:23.214254Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:56:23.214353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-07T00:56:23.215918Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1fb33b3a6db0430d","local-member-id":"f95bb4e8498c60d4","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:56:23.216048Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:56:23.216074Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:56:23.216376Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-07T00:56:23.216392Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  01:05:48 up 15 min,  0 users,  load average: 0.32, 0.36, 0.28
	Linux no-preload-321164 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a] <==
	* I0907 01:01:25.934373       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0907 01:01:25.934257       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:01:25.936471       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:02:24.788506       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.85.94:443: connect: connection refused
	I0907 01:02:24.788689       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:02:25.935394       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:02:25.935684       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:02:25.935722       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:02:25.937673       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:02:25.937716       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:02:25.937722       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:03:24.789174       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.85.94:443: connect: connection refused
	I0907 01:03:24.789338       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0907 01:04:24.789041       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.85.94:443: connect: connection refused
	I0907 01:04:24.789117       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:04:25.936797       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:04:25.937004       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:04:25.937061       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:04:25.938119       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:04:25.938202       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:04:25.938229       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:05:24.789207       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.85.94:443: connect: connection refused
	I0907 01:05:24.789377       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d] <==
	* I0907 01:00:10.552927       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:00:40.091696       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:00:40.562644       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:01:10.098905       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:01:10.577441       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:01:40.107224       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:01:40.587174       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:02:10.113306       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:02:10.602954       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0907 01:02:32.688333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="266.287µs"
	E0907 01:02:40.120287       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:02:40.613359       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0907 01:02:44.679666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="222.081µs"
	E0907 01:03:10.126222       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:03:10.622870       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:03:40.132901       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:03:40.631946       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:04:10.139068       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:04:10.644634       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:04:40.145921       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:04:40.654168       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:05:10.152424       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:05:10.664774       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:05:40.159121       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:05:40.674434       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a] <==
	* I0907 00:56:43.717940       1 server_others.go:69] "Using iptables proxy"
	I0907 00:56:43.732393       1 node.go:141] Successfully retrieved node IP: 192.168.61.125
	I0907 00:56:43.884659       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0907 00:56:43.884712       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0907 00:56:43.887218       1 server_others.go:152] "Using iptables Proxier"
	I0907 00:56:43.887284       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0907 00:56:43.887488       1 server.go:846] "Version info" version="v1.28.1"
	I0907 00:56:43.887498       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:56:43.889388       1 config.go:188] "Starting service config controller"
	I0907 00:56:43.889419       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0907 00:56:43.889445       1 config.go:97] "Starting endpoint slice config controller"
	I0907 00:56:43.889449       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0907 00:56:43.896784       1 config.go:315] "Starting node config controller"
	I0907 00:56:43.896804       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0907 00:56:43.990637       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0907 00:56:43.990753       1 shared_informer.go:318] Caches are synced for service config
	I0907 00:56:43.997207       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8] <==
	* W0907 00:56:24.993880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0907 00:56:24.994017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0907 00:56:24.998801       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0907 00:56:24.998848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0907 00:56:25.871000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0907 00:56:25.871054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0907 00:56:25.958438       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0907 00:56:25.958523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0907 00:56:26.135117       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0907 00:56:26.135254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0907 00:56:26.190223       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0907 00:56:26.190324       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0907 00:56:26.226201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0907 00:56:26.226309       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0907 00:56:26.272869       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0907 00:56:26.272962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0907 00:56:26.276524       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0907 00:56:26.276645       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0907 00:56:26.288331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0907 00:56:26.288384       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0907 00:56:26.299868       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0907 00:56:26.299892       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0907 00:56:26.513038       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0907 00:56:26.513157       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0907 00:56:28.469948       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-07 00:50:42 UTC, ends at Thu 2023-09-07 01:05:48 UTC. --
	Sep 07 01:03:11 no-preload-321164 kubelet[4181]: E0907 01:03:11.663070    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:03:23 no-preload-321164 kubelet[4181]: E0907 01:03:23.663337    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:03:28 no-preload-321164 kubelet[4181]: E0907 01:03:28.702657    4181 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:03:28 no-preload-321164 kubelet[4181]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:03:28 no-preload-321164 kubelet[4181]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:03:28 no-preload-321164 kubelet[4181]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:03:38 no-preload-321164 kubelet[4181]: E0907 01:03:38.664955    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:03:49 no-preload-321164 kubelet[4181]: E0907 01:03:49.663072    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:04:00 no-preload-321164 kubelet[4181]: E0907 01:04:00.663111    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:04:11 no-preload-321164 kubelet[4181]: E0907 01:04:11.662747    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:04:23 no-preload-321164 kubelet[4181]: E0907 01:04:23.662846    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:04:28 no-preload-321164 kubelet[4181]: E0907 01:04:28.702737    4181 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:04:28 no-preload-321164 kubelet[4181]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:04:28 no-preload-321164 kubelet[4181]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:04:28 no-preload-321164 kubelet[4181]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:04:38 no-preload-321164 kubelet[4181]: E0907 01:04:38.663142    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:04:51 no-preload-321164 kubelet[4181]: E0907 01:04:51.662830    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:05:05 no-preload-321164 kubelet[4181]: E0907 01:05:05.662739    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:05:19 no-preload-321164 kubelet[4181]: E0907 01:05:19.663302    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:05:28 no-preload-321164 kubelet[4181]: E0907 01:05:28.704273    4181 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:05:28 no-preload-321164 kubelet[4181]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:05:28 no-preload-321164 kubelet[4181]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:05:28 no-preload-321164 kubelet[4181]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:05:30 no-preload-321164 kubelet[4181]: E0907 01:05:30.666617    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:05:43 no-preload-321164 kubelet[4181]: E0907 01:05:43.662668    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	
	* 
	* ==> storage-provisioner [f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb] <==
	* I0907 00:56:44.931286       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0907 00:56:44.948081       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0907 00:56:44.948194       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0907 00:56:44.956926       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0907 00:56:44.957110       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-321164_8734501e-e66a-4197-984a-84457f7fa177!
	I0907 00:56:44.961079       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b8683b3a-8d26-42a1-bae4-5d58eae1aa63", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-321164_8734501e-e66a-4197-984a-84457f7fa177 became leader
	I0907 00:56:45.057961       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-321164_8734501e-e66a-4197-984a-84457f7fa177!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-321164 -n no-preload-321164
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-321164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vgngs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-321164 describe pod metrics-server-57f55c9bc5-vgngs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-321164 describe pod metrics-server-57f55c9bc5-vgngs: exit status 1 (67.28699ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vgngs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-321164 describe pod metrics-server-57f55c9bc5-vgngs: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.07s)
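
The post-mortem sequence above (list the pods whose phase is not Running, then describe each one) can be reproduced outside the test harness. The following Go sketch shells out to kubectl in the same spirit as helpers_test.go; the context name no-preload-321164 is taken from the log, while the helper program itself is purely illustrative and not minikube's actual code.

// postmortem.go: a minimal, illustrative reproduction of the non-running-pod
// post-mortem shown above. It is NOT the minikube test helper itself.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "no-preload-321164" // profile/context name taken from the log above

	// List pods whose phase is not Running, as the post-mortem does.
	out, err := exec.Command("kubectl", "--context", ctx,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}

	// Describe each non-running pod. Note that `kubectl describe pod <name>`
	// without -n searches the default namespace, which is one way the
	// NotFound error seen above can occur for a kube-system pod.
	for _, pod := range strings.Fields(string(out)) {
		desc, err := exec.Command("kubectl", "--context", ctx,
			"describe", "pod", pod).CombinedOutput()
		fmt.Printf("--- %s ---\n%s", pod, desc)
		if err != nil {
			fmt.Println("describe failed:", err)
		}
	}
}
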

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0907 01:01:17.592721   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 01:01:24.846233   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0907 01:02:47.895535   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0907 01:04:02.117423   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-940806 -n old-k8s-version-940806
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-07 01:08:11.762700156 +0000 UTC m=+5430.484155704
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
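
For context, the 9m0s wait above polls for any pod carrying the k8s-app=kubernetes-dashboard label to become Ready. Outside the test harness, roughly the same check can be expressed with kubectl wait; the Go sketch below is an illustrative stand-in (the context name old-k8s-version-940806 is taken from the log), not start_stop_delete_test.go's actual implementation, and unlike the test it fails immediately if no matching pod exists yet.

// wait_dashboard.go: an illustrative stand-in for the "UserAppExistsAfterStop"
// wait above; it is not the test's own polling code.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	ctx := "old-k8s-version-940806" // context name from the log above

	// Roughly equivalent to waiting 9m0s for pods matching
	// k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace
	// to become Ready.
	cmd := exec.Command("kubectl", "--context", ctx,
		"wait", "--namespace", "kubernetes-dashboard",
		"--for=condition=Ready", "pod",
		"-l", "k8s-app=kubernetes-dashboard",
		"--timeout=9m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

	if err := cmd.Run(); err != nil {
		// A non-zero exit mirrors the "context deadline exceeded" failure
		// above: no matching pod became Ready within the timeout.
		fmt.Println("dashboard pod did not become Ready:", err)
		os.Exit(1)
	}
}
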
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-940806 -n old-k8s-version-940806
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-940806 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-940806 logs -n 25: (1.530289223s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-049830                           | kubernetes-upgrade-049830    | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-386196                              | cert-expiration-386196       | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-049830                           | kubernetes-upgrade-049830    | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-940806        | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:43 UTC | 07 Sep 23 00:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-321164             | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-546209            | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-488051 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | disable-driver-mounts-488051                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:45 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-940806             | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-773466  | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-321164                  | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-546209                 | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-773466       | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC | 07 Sep 23 00:56 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/07 00:48:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:48:30.668905   47297 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:48:30.669040   47297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:48:30.669051   47297 out.go:309] Setting ErrFile to fd 2...
	I0907 00:48:30.669055   47297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:48:30.669275   47297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:48:30.669849   47297 out.go:303] Setting JSON to false
	I0907 00:48:30.670802   47297 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5455,"bootTime":1694042256,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:48:30.670876   47297 start.go:138] virtualization: kvm guest
	I0907 00:48:30.673226   47297 out.go:177] * [default-k8s-diff-port-773466] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:48:30.675018   47297 notify.go:220] Checking for updates...
	I0907 00:48:30.675022   47297 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:48:30.676573   47297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:48:30.677899   47297 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:48:30.679390   47297 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:48:30.680678   47297 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:48:30.682324   47297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:48:30.684199   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:48:30.684737   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:48:30.684791   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:48:30.699093   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37855
	I0907 00:48:30.699446   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:48:30.699961   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:48:30.699981   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:48:30.700356   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:48:30.700531   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:48:30.700779   47297 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:48:30.701065   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:48:30.701099   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:48:30.715031   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41907
	I0907 00:48:30.715374   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:48:30.715847   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:48:30.715866   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:48:30.716151   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:48:30.716316   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:48:30.750129   47297 out.go:177] * Using the kvm2 driver based on existing profile
	I0907 00:48:30.751568   47297 start.go:298] selected driver: kvm2
	I0907 00:48:30.751584   47297 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:48:30.751680   47297 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:48:30.752362   47297 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:48:30.752458   47297 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:48:30.765932   47297 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 00:48:30.766254   47297 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:48:30.766285   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:48:30.766297   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:48:30.766312   47297 start_flags.go:321] config:
	{Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:48:30.766449   47297 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:48:30.768165   47297 out.go:177] * Starting control plane node default-k8s-diff-port-773466 in cluster default-k8s-diff-port-773466
	I0907 00:48:28.807066   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:30.769579   47297 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:48:30.769605   47297 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0907 00:48:30.769618   47297 cache.go:57] Caching tarball of preloaded images
	I0907 00:48:30.769690   47297 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:48:30.769700   47297 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 00:48:30.769802   47297 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/config.json ...
	I0907 00:48:30.769965   47297 start.go:365] acquiring machines lock for default-k8s-diff-port-773466: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:48:34.886988   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:37.959093   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:44.039083   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:47.111100   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:53.191104   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:56.263090   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:02.343026   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:05.415059   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:11.495064   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:14.567091   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:20.647045   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:23.719041   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:29.799012   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:32.871070   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:38.951073   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:42.023127   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:48.103090   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:51.175063   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:57.255062   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:00.327063   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:06.407045   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:09.479083   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:15.559056   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:18.631050   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:24.711070   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:27.783032   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:30.786864   46768 start.go:369] acquired machines lock for "no-preload-321164" in 3m55.470116528s
	I0907 00:50:30.786911   46768 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:50:30.786932   46768 fix.go:54] fixHost starting: 
	I0907 00:50:30.787365   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:50:30.787402   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:50:30.802096   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0907 00:50:30.802471   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:50:30.803040   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:50:30.803070   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:50:30.803390   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:50:30.803609   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:30.803735   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:50:30.805366   46768 fix.go:102] recreateIfNeeded on no-preload-321164: state=Stopped err=<nil>
	I0907 00:50:30.805394   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	W0907 00:50:30.805601   46768 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:50:30.807478   46768 out.go:177] * Restarting existing kvm2 VM for "no-preload-321164" ...
	I0907 00:50:30.784621   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:50:30.784665   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:50:30.786659   46354 machine.go:91] provisioned docker machine in 4m37.428246924s
	I0907 00:50:30.786707   46354 fix.go:56] fixHost completed within 4m37.448613342s
	I0907 00:50:30.786715   46354 start.go:83] releasing machines lock for "old-k8s-version-940806", held for 4m37.448629588s
	W0907 00:50:30.786743   46354 start.go:672] error starting host: provision: host is not running
	W0907 00:50:30.786862   46354 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0907 00:50:30.786876   46354 start.go:687] Will try again in 5 seconds ...
	I0907 00:50:30.809015   46768 main.go:141] libmachine: (no-preload-321164) Calling .Start
	I0907 00:50:30.809182   46768 main.go:141] libmachine: (no-preload-321164) Ensuring networks are active...
	I0907 00:50:30.809827   46768 main.go:141] libmachine: (no-preload-321164) Ensuring network default is active
	I0907 00:50:30.810153   46768 main.go:141] libmachine: (no-preload-321164) Ensuring network mk-no-preload-321164 is active
	I0907 00:50:30.810520   46768 main.go:141] libmachine: (no-preload-321164) Getting domain xml...
	I0907 00:50:30.811434   46768 main.go:141] libmachine: (no-preload-321164) Creating domain...
	I0907 00:50:32.024103   46768 main.go:141] libmachine: (no-preload-321164) Waiting to get IP...
	I0907 00:50:32.024955   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.025314   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.025386   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.025302   47622 retry.go:31] will retry after 211.413529ms: waiting for machine to come up
	I0907 00:50:32.238887   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.239424   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.239452   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.239400   47622 retry.go:31] will retry after 306.62834ms: waiting for machine to come up
	I0907 00:50:32.547910   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.548378   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.548409   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.548318   47622 retry.go:31] will retry after 360.126343ms: waiting for machine to come up
	I0907 00:50:32.909809   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.910325   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.910356   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.910259   47622 retry.go:31] will retry after 609.953186ms: waiting for machine to come up
	I0907 00:50:33.522073   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:33.522437   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:33.522467   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:33.522382   47622 retry.go:31] will retry after 526.4152ms: waiting for machine to come up
	I0907 00:50:34.050028   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:34.050475   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:34.050503   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:34.050417   47622 retry.go:31] will retry after 748.311946ms: waiting for machine to come up
	I0907 00:50:34.799933   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:34.800367   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:34.800395   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:34.800321   47622 retry.go:31] will retry after 732.484316ms: waiting for machine to come up
	I0907 00:50:35.788945   46354 start.go:365] acquiring machines lock for old-k8s-version-940806: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:50:35.534154   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:35.534583   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:35.534606   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:35.534535   47622 retry.go:31] will retry after 1.217693919s: waiting for machine to come up
	I0907 00:50:36.754260   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:36.754682   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:36.754711   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:36.754634   47622 retry.go:31] will retry after 1.508287783s: waiting for machine to come up
	I0907 00:50:38.264195   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:38.264607   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:38.264630   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:38.264557   47622 retry.go:31] will retry after 1.481448978s: waiting for machine to come up
	I0907 00:50:39.748383   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:39.748865   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:39.748898   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:39.748803   47622 retry.go:31] will retry after 2.345045055s: waiting for machine to come up
	I0907 00:50:42.095158   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:42.095801   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:42.095832   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:42.095747   47622 retry.go:31] will retry after 3.269083195s: waiting for machine to come up
	I0907 00:50:45.369097   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:45.369534   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:45.369561   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:45.369448   47622 retry.go:31] will retry after 4.462134893s: waiting for machine to come up
	I0907 00:50:49.835862   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.836273   46768 main.go:141] libmachine: (no-preload-321164) Found IP for machine: 192.168.61.125
	I0907 00:50:49.836315   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has current primary IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.836342   46768 main.go:141] libmachine: (no-preload-321164) Reserving static IP address...
	I0907 00:50:49.836774   46768 main.go:141] libmachine: (no-preload-321164) Reserved static IP address: 192.168.61.125
	I0907 00:50:49.836794   46768 main.go:141] libmachine: (no-preload-321164) Waiting for SSH to be available...
	I0907 00:50:49.836827   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "no-preload-321164", mac: "52:54:00:eb:da:68", ip: "192.168.61.125"} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.836860   46768 main.go:141] libmachine: (no-preload-321164) DBG | skip adding static IP to network mk-no-preload-321164 - found existing host DHCP lease matching {name: "no-preload-321164", mac: "52:54:00:eb:da:68", ip: "192.168.61.125"}
	I0907 00:50:49.836880   46768 main.go:141] libmachine: (no-preload-321164) DBG | Getting to WaitForSSH function...
	I0907 00:50:49.838931   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.839299   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.839326   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.839464   46768 main.go:141] libmachine: (no-preload-321164) DBG | Using SSH client type: external
	I0907 00:50:49.839500   46768 main.go:141] libmachine: (no-preload-321164) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa (-rw-------)
	I0907 00:50:49.839538   46768 main.go:141] libmachine: (no-preload-321164) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:50:49.839557   46768 main.go:141] libmachine: (no-preload-321164) DBG | About to run SSH command:
	I0907 00:50:49.839568   46768 main.go:141] libmachine: (no-preload-321164) DBG | exit 0
	I0907 00:50:49.930557   46768 main.go:141] libmachine: (no-preload-321164) DBG | SSH cmd err, output: <nil>: 
	I0907 00:50:49.931033   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetConfigRaw
	I0907 00:50:49.931662   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:49.934286   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.934719   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.934755   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.934973   46768 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/config.json ...
	I0907 00:50:49.935197   46768 machine.go:88] provisioning docker machine ...
	I0907 00:50:49.935221   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:49.935409   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:49.935567   46768 buildroot.go:166] provisioning hostname "no-preload-321164"
	I0907 00:50:49.935586   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:49.935730   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:49.937619   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.937879   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.937899   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.938049   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:49.938303   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:49.938464   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:49.938624   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:49.938803   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:49.939300   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:49.939315   46768 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-321164 && echo "no-preload-321164" | sudo tee /etc/hostname
	I0907 00:50:50.076488   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-321164
	
	I0907 00:50:50.076513   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.079041   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.079362   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.079409   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.079614   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.079831   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.080013   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.080183   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.080361   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:50.080757   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:50.080775   46768 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-321164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-321164/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-321164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:50:51.203755   46833 start.go:369] acquired machines lock for "embed-certs-546209" in 4m11.274622402s
	I0907 00:50:51.203804   46833 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:50:51.203823   46833 fix.go:54] fixHost starting: 
	I0907 00:50:51.204233   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:50:51.204274   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:50:51.221096   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0907 00:50:51.221487   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:50:51.222026   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:50:51.222048   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:50:51.222401   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:50:51.222595   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:50:51.222757   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:50:51.224388   46833 fix.go:102] recreateIfNeeded on embed-certs-546209: state=Stopped err=<nil>
	I0907 00:50:51.224413   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	W0907 00:50:51.224585   46833 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:50:51.226812   46833 out.go:177] * Restarting existing kvm2 VM for "embed-certs-546209" ...
	I0907 00:50:50.214796   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:50:50.215590   46768 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:50:50.215629   46768 buildroot.go:174] setting up certificates
	I0907 00:50:50.215639   46768 provision.go:83] configureAuth start
	I0907 00:50:50.215659   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:50.215952   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:50.218581   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.218947   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.218970   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.219137   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.221833   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.222177   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.222221   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.222323   46768 provision.go:138] copyHostCerts
	I0907 00:50:50.222377   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:50:50.222390   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:50:50.222497   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:50:50.222628   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:50:50.222646   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:50:50.222682   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:50:50.222765   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:50:50.222784   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:50:50.222817   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:50:50.222880   46768 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.no-preload-321164 san=[192.168.61.125 192.168.61.125 localhost 127.0.0.1 minikube no-preload-321164]
	I0907 00:50:50.456122   46768 provision.go:172] copyRemoteCerts
	I0907 00:50:50.456175   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:50:50.456198   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.458665   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.459030   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.459053   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.459237   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.459468   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.459630   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.459766   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:50.549146   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:50:50.572002   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0907 00:50:50.595576   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:50:50.618054   46768 provision.go:86] duration metric: configureAuth took 402.401011ms
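
The configureAuth phase logged above copies the host's ca.pem/cert.pem/key.pem into the profile directory and then generates a server certificate whose SANs cover the VM's IP address, localhost, loopback, "minikube" and the machine name, before scp'ing it to /etc/docker on the guest. A minimal Go sketch of the certificate-generation idea, self-signed for brevity (the real provisioner signs with the profile's CA key) and with SAN values simply copied from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical sketch: issue a server certificate with the SANs seen in the
	// log. Self-signed here for brevity; minikube signs with the profile CA.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-321164"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-321164"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.61.125"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// The provisioner would write this PEM to server.pem and scp it to /etc/docker,
	// as the scp lines in the log show.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
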
	I0907 00:50:50.618086   46768 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:50:50.618327   46768 config.go:182] Loaded profile config "no-preload-321164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:50:50.618410   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.620908   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.621255   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.621289   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.621432   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.621619   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.621752   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.621879   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.622006   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:50.622586   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:50.622611   46768 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:50:50.946938   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:50:50.946964   46768 machine.go:91] provisioned docker machine in 1.011750962s
	I0907 00:50:50.946975   46768 start.go:300] post-start starting for "no-preload-321164" (driver="kvm2")
	I0907 00:50:50.946989   46768 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:50:50.947015   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:50.947339   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:50:50.947367   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.950370   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.950754   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.950798   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.950909   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.951171   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.951331   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.951472   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.040440   46768 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:50:51.044700   46768 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:50:51.044728   46768 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:50:51.044816   46768 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:50:51.044899   46768 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:50:51.045018   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:50:51.053507   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:50:51.077125   46768 start.go:303] post-start completed in 130.134337ms
	I0907 00:50:51.077149   46768 fix.go:56] fixHost completed within 20.29021748s
	I0907 00:50:51.077174   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.079928   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.080266   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.080297   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.080516   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.080744   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.080909   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.081080   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.081255   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:51.081837   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:51.081853   46768 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:50:51.203596   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047851.182131777
	
	I0907 00:50:51.203636   46768 fix.go:206] guest clock: 1694047851.182131777
	I0907 00:50:51.203646   46768 fix.go:219] Guest: 2023-09-07 00:50:51.182131777 +0000 UTC Remote: 2023-09-07 00:50:51.077154021 +0000 UTC m=+255.896364351 (delta=104.977756ms)
	I0907 00:50:51.203664   46768 fix.go:190] guest clock delta is within tolerance: 104.977756ms
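
The fix.go lines above read the guest clock over SSH (seconds and nanoseconds since the epoch, effectively `date +%s.%N`), compare it against the host's wall clock, and proceed because the ~105ms skew is within tolerance. An illustrative sketch of that comparison; the 2-second tolerance is an assumed example value, not necessarily minikube's threshold:

package main

import (
	"fmt"
	"time"
)

// withinTolerance mirrors the comparison above: accept the guest clock if the
// host/guest skew is small enough. The tolerance value is an assumption.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(1694047851, 182131777) // seconds.nanoseconds read from the guest
	host := time.Unix(1694047851, 77154021)   // host wall clock at roughly the same moment
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true; delta is about 105ms
}
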
	I0907 00:50:51.203668   46768 start.go:83] releasing machines lock for "no-preload-321164", held for 20.416782491s
	I0907 00:50:51.203696   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.203977   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:51.207262   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.207708   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.207755   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.207926   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208394   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208563   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208644   46768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:50:51.208692   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.208755   46768 ssh_runner.go:195] Run: cat /version.json
	I0907 00:50:51.208777   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.211412   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211453   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211863   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.211901   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211931   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.211957   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.212132   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.212212   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.212318   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.212406   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.212477   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.212612   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.212722   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.212875   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.300796   46768 ssh_runner.go:195] Run: systemctl --version
	I0907 00:50:51.324903   46768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:50:51.465767   46768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:50:51.471951   46768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:50:51.472036   46768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:50:51.488733   46768 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:50:51.488761   46768 start.go:466] detecting cgroup driver to use...
	I0907 00:50:51.488831   46768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:50:51.501772   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:50:51.516019   46768 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:50:51.516083   46768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:50:51.530425   46768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:50:51.546243   46768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:50:51.649058   46768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:50:51.768622   46768 docker.go:212] disabling docker service ...
	I0907 00:50:51.768705   46768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:50:51.785225   46768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:50:51.797018   46768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:50:51.908179   46768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:50:52.021212   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
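
Before configuring cri-o, the runtime detection above stops, disables and masks cri-docker and docker so they cannot come back and claim the container socket. A rough Go sketch of that stop/disable/mask sequence; the unit names are taken from the log and error handling is deliberately simplified:

package main

import (
	"fmt"
	"os/exec"
)

// disableUnit stops, disables and masks a systemd unit, continuing past
// failures the way the log above does for units that are already inactive.
func disableUnit(unit string) {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	} {
		if err := exec.Command("sudo", args...).Run(); err != nil {
			fmt.Printf("%v: %v (continuing)\n", args, err)
		}
	}
}

func main() {
	disableUnit("cri-docker.socket")
	disableUnit("docker.service")
}
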
	I0907 00:50:52.037034   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:50:52.055163   46768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:50:52.055218   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.065451   46768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:50:52.065520   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.076202   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.086865   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.096978   46768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:50:52.107492   46768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:50:52.117036   46768 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:50:52.117104   46768 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:50:52.130309   46768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:50:52.140016   46768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:50:52.249901   46768 ssh_runner.go:195] Run: sudo systemctl restart crio
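
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.9, switches cgroup_manager to cgroupfs, and replaces any conmon_cgroup setting with "pod" before reloading systemd and restarting cri-o. A small Go sketch that applies the same three edits to an assumed sample of that drop-in file (the sample contents are illustrative, not copied from the VM):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed sample of /etc/crio/crio.conf.d/02-crio.conf before the edits.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// 1. Pin the pause image, as the first sed above does.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// 2. Drop any existing conmon_cgroup line.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	// 3. Force cgroupfs and re-add conmon_cgroup = "pod" right after it.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
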
	I0907 00:50:52.422851   46768 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:50:52.422928   46768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:50:52.427852   46768 start.go:534] Will wait 60s for crictl version
	I0907 00:50:52.427903   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:52.431904   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:50:52.472552   46768 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:50:52.472632   46768 ssh_runner.go:195] Run: crio --version
	I0907 00:50:52.526514   46768 ssh_runner.go:195] Run: crio --version
	I0907 00:50:52.580133   46768 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:50:51.228316   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Start
	I0907 00:50:51.228549   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring networks are active...
	I0907 00:50:51.229311   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring network default is active
	I0907 00:50:51.229587   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring network mk-embed-certs-546209 is active
	I0907 00:50:51.230001   46833 main.go:141] libmachine: (embed-certs-546209) Getting domain xml...
	I0907 00:50:51.230861   46833 main.go:141] libmachine: (embed-certs-546209) Creating domain...
	I0907 00:50:52.512329   46833 main.go:141] libmachine: (embed-certs-546209) Waiting to get IP...
	I0907 00:50:52.513160   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:52.513607   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:52.513709   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:52.513575   47738 retry.go:31] will retry after 266.575501ms: waiting for machine to come up
	I0907 00:50:52.782236   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:52.782674   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:52.782699   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:52.782623   47738 retry.go:31] will retry after 258.252832ms: waiting for machine to come up
	I0907 00:50:53.042276   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:53.042851   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:53.042886   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:53.042799   47738 retry.go:31] will retry after 480.751908ms: waiting for machine to come up
	I0907 00:50:53.525651   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:53.526280   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:53.526314   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:53.526222   47738 retry.go:31] will retry after 592.373194ms: waiting for machine to come up
	I0907 00:50:54.119935   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:54.120401   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:54.120440   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:54.120320   47738 retry.go:31] will retry after 602.269782ms: waiting for machine to come up
	I0907 00:50:54.723919   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:54.724403   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:54.724429   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:54.724356   47738 retry.go:31] will retry after 631.28427ms: waiting for machine to come up
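
While no-preload-321164 is being reprovisioned, the embed-certs-546209 goroutine above polls libvirt for the domain's DHCP lease, retrying with a growing, jittered delay until an IP appears. An illustrative sketch of that wait loop; lookupIP is a hypothetical stand-in for the real lease query and the timing constants are assumptions:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases for the
// domain; here it always fails so the retry path is exercised.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls until an address appears, sleeping a growing, jittered
// interval between attempts, much like the retry.go lines above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff *= 2
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
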
	I0907 00:50:52.581522   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:52.584587   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:52.584995   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:52.585027   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:52.585212   46768 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0907 00:50:52.589138   46768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:50:52.602205   46768 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:50:52.602259   46768 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:50:52.633785   46768 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:50:52.633808   46768 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0907 00:50:52.633868   46768 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.633887   46768 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.633889   46768 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.633929   46768 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0907 00:50:52.633954   46768 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.633849   46768 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:52.633937   46768 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.634076   46768 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.635447   46768 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.635477   46768 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.635516   46768 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.635529   46768 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.635477   46768 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.635578   46768 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.635583   46768 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0907 00:50:52.635587   46768 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:52.868791   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.917664   46768 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0907 00:50:52.917705   46768 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.917740   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:52.921520   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.924174   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.924775   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0907 00:50:52.926455   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.927265   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.936511   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.936550   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.989863   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0907 00:50:52.989967   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.081783   46768 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0907 00:50:53.081828   46768 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:53.081876   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.200951   46768 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0907 00:50:53.200999   46768 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:53.201037   46768 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0907 00:50:53.201055   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201074   46768 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:53.201115   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201120   46768 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0907 00:50:53.201138   46768 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:53.201163   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201196   46768 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0907 00:50:53.201208   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0907 00:50:53.201220   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.201222   46768 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:53.201245   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.201254   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201257   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:53.213879   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:53.213909   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:53.214030   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:53.559290   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.356797   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:55.357248   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:55.357276   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:55.357208   47738 retry.go:31] will retry after 957.470134ms: waiting for machine to come up
	I0907 00:50:56.316920   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:56.317410   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:56.317437   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:56.317357   47738 retry.go:31] will retry after 929.647798ms: waiting for machine to come up
	I0907 00:50:57.249114   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:57.249599   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:57.249631   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:57.249548   47738 retry.go:31] will retry after 1.218276188s: waiting for machine to come up
	I0907 00:50:58.470046   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:58.470509   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:58.470539   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:58.470461   47738 retry.go:31] will retry after 2.324175972s: waiting for machine to come up
	I0907 00:50:55.219723   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (2.018454399s)
	I0907 00:50:55.219753   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0907 00:50:55.219835   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0: (2.018563387s)
	I0907 00:50:55.219874   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0907 00:50:55.219897   46768 ssh_runner.go:235] Completed: which crictl: (2.01861063s)
	I0907 00:50:55.219931   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1: (2.006023749s)
	I0907 00:50:55.219956   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:55.219965   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0907 00:50:55.219974   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:55.220018   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.220026   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1: (2.006085999s)
	I0907 00:50:55.220034   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1: (2.005987599s)
	I0907 00:50:55.220056   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0907 00:50:55.220062   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0907 00:50:55.220065   46768 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.660750078s)
	I0907 00:50:55.220091   46768 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0907 00:50:55.220107   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:50:55.220139   46768 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.220178   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:55.220141   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:50:55.263187   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0907 00:50:55.263256   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0907 00:50:55.263276   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.263282   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0907 00:50:55.263291   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:50:55.263321   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.263334   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0907 00:50:55.263428   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0907 00:50:55.263432   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.275710   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0907 00:50:58.251089   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.987744073s)
	I0907 00:50:58.251119   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0907 00:50:58.251125   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.987662447s)
	I0907 00:50:58.251143   46768 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:58.251164   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0907 00:50:58.251192   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:58.251253   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0907 00:50:58.256733   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0907 00:51:00.798145   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:00.798673   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:00.798702   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:00.798607   47738 retry.go:31] will retry after 1.874271621s: waiting for machine to come up
	I0907 00:51:02.674532   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:02.675085   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:02.675117   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:02.675050   47738 retry.go:31] will retry after 2.9595889s: waiting for machine to come up
	I0907 00:51:04.952628   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.701410779s)
	I0907 00:51:04.952741   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0907 00:51:04.952801   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:51:04.952854   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:51:05.636309   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:05.636744   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:05.636779   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:05.636694   47738 retry.go:31] will retry after 4.45645523s: waiting for machine to come up
	I0907 00:51:06.100759   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.147880303s)
	I0907 00:51:06.100786   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0907 00:51:06.100803   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:51:06.100844   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:51:08.663694   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.56282168s)
	I0907 00:51:08.663725   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0907 00:51:08.663754   46768 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:51:08.663803   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:51:10.023202   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.359374479s)
	I0907 00:51:10.023234   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0907 00:51:10.023276   46768 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0907 00:51:10.023349   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
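
Because no preload tarball exists for v1.28.1 on cri-o, the cache_images flow above checks each required image in the runtime, removes any stale copy with crictl, skips the scp when the tarball already sits in /var/lib/minikube/images, and loads it with `sudo podman load -i ...`. A simplified Go sketch of the per-image check-then-load step; the paths and helper names are illustrative, not minikube's actual API:

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads a cached image tarball into the runtime only if the image
// is not already present, mirroring the "needs transfer" / "Loading image"
// pattern in the log above.
func ensureImage(image, tarball string) error {
	if err := exec.Command("sudo", "podman", "image", "exists", image).Run(); err == nil {
		return nil // already present in the container runtime
	}
	fmt.Printf("Loading image: %s\n", tarball)
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := ensureImage("registry.k8s.io/kube-proxy:v1.28.1",
		"/var/lib/minikube/images/kube-proxy_v1.28.1")
	fmt.Println(err)
}

Loading the tarballs one at a time, as the log does, keeps only a single `podman load` running on the guest at once, which is why the large etcd image dominates the elapsed time here.
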
	I0907 00:51:11.739345   47297 start.go:369] acquired machines lock for "default-k8s-diff-port-773466" in 2m40.969329009s
	I0907 00:51:11.739394   47297 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:51:11.739419   47297 fix.go:54] fixHost starting: 
	I0907 00:51:11.739834   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:11.739870   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:11.755796   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38079
	I0907 00:51:11.756102   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:11.756564   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:51:11.756588   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:11.756875   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:11.757032   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:11.757185   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:51:11.758750   47297 fix.go:102] recreateIfNeeded on default-k8s-diff-port-773466: state=Stopped err=<nil>
	I0907 00:51:11.758772   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	W0907 00:51:11.758955   47297 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:51:11.761066   47297 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-773466" ...
	I0907 00:51:10.095825   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.096285   46833 main.go:141] libmachine: (embed-certs-546209) Found IP for machine: 192.168.50.242
	I0907 00:51:10.096312   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has current primary IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.096321   46833 main.go:141] libmachine: (embed-certs-546209) Reserving static IP address...
	I0907 00:51:10.096706   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "embed-certs-546209", mac: "52:54:00:96:b3:6a", ip: "192.168.50.242"} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.096731   46833 main.go:141] libmachine: (embed-certs-546209) Reserved static IP address: 192.168.50.242
	I0907 00:51:10.096750   46833 main.go:141] libmachine: (embed-certs-546209) DBG | skip adding static IP to network mk-embed-certs-546209 - found existing host DHCP lease matching {name: "embed-certs-546209", mac: "52:54:00:96:b3:6a", ip: "192.168.50.242"}
	I0907 00:51:10.096766   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Getting to WaitForSSH function...
	I0907 00:51:10.096777   46833 main.go:141] libmachine: (embed-certs-546209) Waiting for SSH to be available...
	I0907 00:51:10.098896   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.099227   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.099260   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.099360   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Using SSH client type: external
	I0907 00:51:10.099382   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa (-rw-------)
	I0907 00:51:10.099412   46833 main.go:141] libmachine: (embed-certs-546209) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:10.099428   46833 main.go:141] libmachine: (embed-certs-546209) DBG | About to run SSH command:
	I0907 00:51:10.099444   46833 main.go:141] libmachine: (embed-certs-546209) DBG | exit 0
	I0907 00:51:10.199038   46833 main.go:141] libmachine: (embed-certs-546209) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:10.199377   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetConfigRaw
	I0907 00:51:10.200006   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:10.202924   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.203328   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.203352   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.203576   46833 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/config.json ...
	I0907 00:51:10.203879   46833 machine.go:88] provisioning docker machine ...
	I0907 00:51:10.203908   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:10.204125   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.204290   46833 buildroot.go:166] provisioning hostname "embed-certs-546209"
	I0907 00:51:10.204312   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.204489   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.206898   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.207332   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.207365   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.207473   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.207643   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.207791   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.207920   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.208080   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:10.208476   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:10.208496   46833 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-546209 && echo "embed-certs-546209" | sudo tee /etc/hostname
	I0907 00:51:10.356060   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-546209
	
	I0907 00:51:10.356098   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.359533   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.359867   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.359896   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.360097   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.360284   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.360435   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.360629   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.360820   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:10.361504   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:10.361538   46833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-546209' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-546209/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-546209' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:10.503181   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:10.503211   46833 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:10.503238   46833 buildroot.go:174] setting up certificates
	I0907 00:51:10.503246   46833 provision.go:83] configureAuth start
	I0907 00:51:10.503254   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.503555   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:10.506514   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.506930   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.506955   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.507150   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.509772   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.510081   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.510111   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.510215   46833 provision.go:138] copyHostCerts
	I0907 00:51:10.510281   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:10.510292   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:10.510345   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:10.510438   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:10.510446   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:10.510466   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:10.510552   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:10.510559   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:10.510579   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:10.510638   46833 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.embed-certs-546209 san=[192.168.50.242 192.168.50.242 localhost 127.0.0.1 minikube embed-certs-546209]
	I0907 00:51:10.947044   46833 provision.go:172] copyRemoteCerts
	I0907 00:51:10.947101   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:10.947122   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.949879   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.950221   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.950251   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.950456   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.950660   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.950849   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.950993   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.052610   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:11.077082   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0907 00:51:11.100979   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
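	(For context: the three files copied above are the CA, server cert, and server key that the provisioner installs under /etc/docker. A quick way to confirm the server certificate landed with the SANs generated a few lines earlier — 192.168.50.242, localhost, 127.0.0.1, minikube, embed-certs-546209 — would be a check along these lines; this is an illustrative command, not part of this run's log:
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	)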
	I0907 00:51:11.124155   46833 provision.go:86] duration metric: configureAuth took 620.900948ms
	I0907 00:51:11.124176   46833 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:11.124389   46833 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:11.124456   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.127163   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.127498   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.127536   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.127813   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.128011   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.128201   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.128381   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.128560   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:11.129185   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:11.129214   46833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:11.467260   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:11.467297   46833 machine.go:91] provisioned docker machine in 1.263400182s
	I0907 00:51:11.467309   46833 start.go:300] post-start starting for "embed-certs-546209" (driver="kvm2")
	I0907 00:51:11.467321   46833 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:11.467343   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.467669   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:11.467715   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.470299   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.470675   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.470705   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.470846   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.471038   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.471191   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.471435   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.568708   46833 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:11.573505   46833 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:11.573533   46833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:11.573595   46833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:11.573669   46833 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:11.573779   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:11.582612   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:11.607383   46833 start.go:303] post-start completed in 140.062214ms
	I0907 00:51:11.607400   46833 fix.go:56] fixHost completed within 20.403578781s
	I0907 00:51:11.607419   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.609882   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.610233   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.610265   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.610411   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.610602   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.610792   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.610972   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.611161   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:11.611550   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:11.611563   46833 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:51:11.739146   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047871.687486971
	
	I0907 00:51:11.739167   46833 fix.go:206] guest clock: 1694047871.687486971
	I0907 00:51:11.739176   46833 fix.go:219] Guest: 2023-09-07 00:51:11.687486971 +0000 UTC Remote: 2023-09-07 00:51:11.607403696 +0000 UTC m=+271.818672785 (delta=80.083275ms)
	I0907 00:51:11.739196   46833 fix.go:190] guest clock delta is within tolerance: 80.083275ms
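	(The delta reported above is simply the guest timestamp minus the host-side timestamp: 1694047871.687486971 − 1694047871.607403696 ≈ 0.080083275 s, i.e. the 80.083275ms shown, which fix.go then reports as within tolerance.)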
	I0907 00:51:11.739202   46833 start.go:83] releasing machines lock for "embed-certs-546209", held for 20.535419293s
	I0907 00:51:11.739232   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.739478   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:11.742078   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.742446   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.742474   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.742676   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743172   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743342   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743422   46833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:11.743470   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.743541   46833 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:11.743573   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.746120   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746484   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.746516   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746536   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746640   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.746843   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.746989   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.747015   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.747044   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.747169   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.747179   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.747394   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.747556   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.747717   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.839831   46833 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:11.861736   46833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:12.006017   46833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:12.011678   46833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:12.011739   46833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:12.026851   46833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:12.026871   46833 start.go:466] detecting cgroup driver to use...
	I0907 00:51:12.026934   46833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:12.040077   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:12.052962   46833 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:12.053018   46833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:12.066509   46833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:12.079587   46833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:12.189043   46833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:12.310997   46833 docker.go:212] disabling docker service ...
	I0907 00:51:12.311065   46833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:12.324734   46833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:12.336808   46833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:12.461333   46833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:12.584841   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:12.598337   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:12.615660   46833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:51:12.615736   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.626161   46833 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:12.626232   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.637475   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.647631   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
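	(Taken together, the sed edits above should leave the CRI-O drop-in with the pause image, cgroup manager, and conmon cgroup settings shown below. This is an illustrative reconstruction of the expected keys in /etc/crio/crio.conf.d/02-crio.conf, not a dump from this run:
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	)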
	I0907 00:51:12.658444   46833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:12.669167   46833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:12.678558   46833 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:12.678614   46833 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:12.692654   46833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:12.703465   46833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:12.820819   46833 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:12.996574   46833 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:12.996650   46833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:13.002744   46833 start.go:534] Will wait 60s for crictl version
	I0907 00:51:13.002818   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:51:13.007287   46833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:13.042173   46833 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:13.042254   46833 ssh_runner.go:195] Run: crio --version
	I0907 00:51:13.090562   46833 ssh_runner.go:195] Run: crio --version
	I0907 00:51:13.145112   46833 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:51:13.146767   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:13.149953   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:13.150357   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:13.150388   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:13.150603   46833 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:13.154792   46833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:13.166540   46833 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:51:13.166607   46833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:13.203316   46833 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:51:13.203391   46833 ssh_runner.go:195] Run: which lz4
	I0907 00:51:13.207399   46833 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0907 00:51:13.211826   46833 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:13.211854   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0907 00:51:10.979891   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0907 00:51:10.979935   46768 cache_images.go:123] Successfully loaded all cached images
	I0907 00:51:10.979942   46768 cache_images.go:92] LoadImages completed in 18.346122768s
	I0907 00:51:10.980017   46768 ssh_runner.go:195] Run: crio config
	I0907 00:51:11.044573   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:51:11.044595   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:11.044612   46768 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:11.044630   46768 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-321164 NodeName:no-preload-321164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:11.044749   46768 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-321164"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:11.044807   46768 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-321164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-321164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:51:11.044852   46768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:11.055469   46768 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:11.055527   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:11.063642   46768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0907 00:51:11.081151   46768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:11.098623   46768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0907 00:51:11.116767   46768 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:11.120552   46768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:11.133845   46768 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164 for IP: 192.168.61.125
	I0907 00:51:11.133876   46768 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:11.134026   46768 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:11.134092   46768 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:11.134173   46768 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.key
	I0907 00:51:11.134216   46768 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.key.05d6cdfc
	I0907 00:51:11.134252   46768 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.key
	I0907 00:51:11.134393   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:11.134436   46768 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:11.134455   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:11.134488   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:11.134512   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:11.134534   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:11.134576   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:11.135184   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:11.161212   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:51:11.185797   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:11.209084   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:51:11.233001   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:11.255646   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:11.278323   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:11.301913   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:11.324316   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:11.349950   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:11.375738   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:11.402735   46768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:11.421372   46768 ssh_runner.go:195] Run: openssl version
	I0907 00:51:11.426855   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:11.436392   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.440778   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.440825   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.446374   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:11.455773   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:11.465073   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.470197   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.470243   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.475740   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:11.484993   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:11.494256   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.498766   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.498825   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.504037   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:11.512896   46768 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:11.517289   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:11.523115   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:11.528780   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:11.534330   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:11.539777   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:11.545439   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:51:11.550878   46768 kubeadm.go:404] StartCluster: {Name:no-preload-321164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-321164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:11.550968   46768 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:11.551014   46768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:11.582341   46768 cri.go:89] found id: ""
	I0907 00:51:11.582409   46768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:11.591760   46768 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:11.591782   46768 kubeadm.go:636] restartCluster start
	I0907 00:51:11.591825   46768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:11.600241   46768 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:11.601258   46768 kubeconfig.go:92] found "no-preload-321164" server: "https://192.168.61.125:8443"
	I0907 00:51:11.603775   46768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:11.612221   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:11.612268   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:11.622330   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:11.622348   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:11.622392   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:11.632889   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:12.133626   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:12.133726   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:12.144713   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:12.633065   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:12.633145   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:12.648698   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:13.133304   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:13.133401   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:13.146822   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:13.633303   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:13.633374   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:13.648566   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:14.132966   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:14.133041   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:14.147847   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:14.633090   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:14.633177   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:14.648893   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.133388   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:15.133465   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:15.149162   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:11.762623   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Start
	I0907 00:51:11.762823   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring networks are active...
	I0907 00:51:11.763580   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring network default is active
	I0907 00:51:11.764022   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring network mk-default-k8s-diff-port-773466 is active
	I0907 00:51:11.764494   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Getting domain xml...
	I0907 00:51:11.765139   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Creating domain...
	I0907 00:51:13.032555   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting to get IP...
	I0907 00:51:13.033441   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.033887   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.033934   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.033855   47907 retry.go:31] will retry after 214.721735ms: waiting for machine to come up
	I0907 00:51:13.250549   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.251062   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.251090   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.251001   47907 retry.go:31] will retry after 260.305773ms: waiting for machine to come up
	I0907 00:51:13.512603   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.513144   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.513175   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.513088   47907 retry.go:31] will retry after 293.213959ms: waiting for machine to come up
	I0907 00:51:13.807649   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.808180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.808216   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.808128   47907 retry.go:31] will retry after 455.70029ms: waiting for machine to come up
	I0907 00:51:14.265914   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:14.266412   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:14.266444   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:14.266367   47907 retry.go:31] will retry after 761.48199ms: waiting for machine to come up
	I0907 00:51:15.029446   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.029916   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.029950   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:15.029868   47907 retry.go:31] will retry after 889.947924ms: waiting for machine to come up
	I0907 00:51:15.079606   46833 crio.go:444] Took 1.872243 seconds to copy over tarball
	I0907 00:51:15.079679   46833 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:51:18.068521   46833 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988813422s)
	I0907 00:51:18.068547   46833 crio.go:451] Took 2.988919 seconds to extract the tarball
	I0907 00:51:18.068557   46833 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:51:18.109973   46833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:18.154472   46833 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:51:18.154493   46833 cache_images.go:84] Images are preloaded, skipping loading
	I0907 00:51:18.154568   46833 ssh_runner.go:195] Run: crio config
	I0907 00:51:18.216517   46833 cni.go:84] Creating CNI manager for ""
	I0907 00:51:18.216549   46833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:18.216571   46833 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:18.216597   46833 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.242 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-546209 NodeName:embed-certs-546209 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:18.216747   46833 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-546209"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:18.216815   46833 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-546209 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:51:18.216863   46833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:18.230093   46833 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:18.230164   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:18.239087   46833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0907 00:51:18.256683   46833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:18.274030   46833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0907 00:51:18.294711   46833 ssh_runner.go:195] Run: grep 192.168.50.242	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:18.299655   46833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:18.312980   46833 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209 for IP: 192.168.50.242
	I0907 00:51:18.313028   46833 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:18.313215   46833 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:18.313283   46833 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:18.313382   46833 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/client.key
	I0907 00:51:18.313446   46833 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.key.5dc0f9a1
	I0907 00:51:18.313495   46833 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.key
	I0907 00:51:18.313607   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:18.313633   46833 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:18.313640   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:18.313665   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:18.313688   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:18.313709   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:18.313747   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:18.314356   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:18.344731   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:51:18.368872   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:18.397110   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:51:18.424441   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:18.452807   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:18.481018   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:18.509317   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:18.541038   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:18.565984   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:18.590863   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:18.614083   46833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:18.631295   46833 ssh_runner.go:195] Run: openssl version
	I0907 00:51:18.637229   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:18.651999   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.656999   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.657052   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.663109   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:18.675826   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:18.688358   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.693281   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.693331   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.699223   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:18.711511   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:18.724096   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.729285   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.729338   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.735410   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:18.747948   46833 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:18.753003   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:18.759519   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:18.765813   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:18.772328   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:18.778699   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:18.785207   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:51:18.791515   46833 kubeadm.go:404] StartCluster: {Name:embed-certs-546209 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:18.791636   46833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:18.791719   46833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:18.831468   46833 cri.go:89] found id: ""
	I0907 00:51:18.831544   46833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:18.843779   46833 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:18.843805   46833 kubeadm.go:636] restartCluster start
	I0907 00:51:18.843863   46833 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:18.854604   46833 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.855622   46833 kubeconfig.go:92] found "embed-certs-546209" server: "https://192.168.50.242:8443"
	I0907 00:51:18.857679   46833 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:18.867583   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.867640   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.879567   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.879587   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.879634   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.891098   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.391839   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.391932   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.405078   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.633045   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:15.633128   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:15.644837   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:16.133842   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:16.133926   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:16.148072   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:16.633750   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:16.633828   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:16.648961   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:17.133669   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:17.133757   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:17.148342   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:17.633967   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:17.634076   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:17.649188   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.133815   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.133917   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.148350   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.633962   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.634047   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.649195   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.133733   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.133821   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.145109   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.633727   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.633808   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.645272   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.133921   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.133990   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.145494   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.920914   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.921395   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.921430   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:15.921325   47907 retry.go:31] will retry after 952.422054ms: waiting for machine to come up
	I0907 00:51:16.875800   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:16.876319   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:16.876356   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:16.876272   47907 retry.go:31] will retry after 1.481584671s: waiting for machine to come up
	I0907 00:51:18.359815   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:18.360270   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:18.360308   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:18.360185   47907 retry.go:31] will retry after 1.355619716s: waiting for machine to come up
	I0907 00:51:19.717081   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:19.717458   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:19.717485   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:19.717419   47907 retry.go:31] will retry after 1.450172017s: waiting for machine to come up
	I0907 00:51:19.892019   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.038702   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.051318   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.391815   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.391913   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.404956   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.891503   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.891594   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.904473   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.391486   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.391563   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.405726   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.891257   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.891337   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.905422   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:22.392028   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:22.392137   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:22.408621   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:22.891926   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:22.892033   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:22.906116   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:23.391605   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:23.391684   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:23.404834   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:23.891360   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:23.891447   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:23.908340   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:24.391916   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:24.392007   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:24.408806   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.633099   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.633200   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.644181   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.133144   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.133227   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.144139   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.612786   46768 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:21.612814   46768 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:21.612826   46768 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:21.612881   46768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:21.643142   46768 cri.go:89] found id: ""
	I0907 00:51:21.643216   46768 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:21.658226   46768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:21.666895   46768 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:21.666960   46768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:21.675285   46768 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:21.675317   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:21.817664   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.473084   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.670341   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.752820   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.842789   46768 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:22.842868   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:22.861783   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:23.383385   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:23.884041   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:24.384065   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:24.884077   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:21.168650   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:21.169014   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:21.169037   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:21.168966   47907 retry.go:31] will retry after 2.876055316s: waiting for machine to come up
	I0907 00:51:24.046598   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:24.046990   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:24.047020   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:24.046937   47907 retry.go:31] will retry after 2.837607521s: waiting for machine to come up
	I0907 00:51:24.891477   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:24.891564   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:24.908102   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:25.391625   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:25.391704   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:25.408399   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:25.892052   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:25.892166   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:25.909608   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:26.391529   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:26.391610   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:26.407459   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:26.891930   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:26.891994   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:26.908217   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:27.391815   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:27.391898   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:27.404370   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:27.891918   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:27.892001   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:27.904988   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:28.391570   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:28.391650   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:28.403968   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:28.868619   46833 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:28.868666   46833 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:28.868679   46833 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:28.868736   46833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:28.907258   46833 cri.go:89] found id: ""
	I0907 00:51:28.907332   46833 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:28.926539   46833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:28.938760   46833 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:28.938837   46833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:28.950550   46833 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:28.950576   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:29.092484   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:25.383423   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:25.413853   46768 api_server.go:72] duration metric: took 2.571070768s to wait for apiserver process to appear ...
	I0907 00:51:25.413877   46768 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:25.413895   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.168577   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:29.168617   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:29.168629   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.228753   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:29.228785   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:29.729501   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.735318   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:29.735345   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:26.886341   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:26.886797   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:26.886819   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:26.886742   47907 retry.go:31] will retry after 3.776269501s: waiting for machine to come up
	I0907 00:51:30.665170   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.665736   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Found IP for machine: 192.168.39.96
	I0907 00:51:30.665770   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Reserving static IP address...
	I0907 00:51:30.665788   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has current primary IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.666183   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-773466", mac: "52:54:00:61:2c:44", ip: "192.168.39.96"} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.666226   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | skip adding static IP to network mk-default-k8s-diff-port-773466 - found existing host DHCP lease matching {name: "default-k8s-diff-port-773466", mac: "52:54:00:61:2c:44", ip: "192.168.39.96"}
	I0907 00:51:30.666245   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Reserved static IP address: 192.168.39.96
	I0907 00:51:30.666262   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for SSH to be available...
	I0907 00:51:30.666279   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Getting to WaitForSSH function...
	I0907 00:51:30.668591   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.229871   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:30.240735   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:30.240764   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:30.729911   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:30.736989   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 200:
	ok
	I0907 00:51:30.746939   46768 api_server.go:141] control plane version: v1.28.1
	I0907 00:51:30.746964   46768 api_server.go:131] duration metric: took 5.333080985s to wait for apiserver health ...
	I0907 00:51:30.746973   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:51:30.746979   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:30.748709   46768 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:32.716941   46354 start.go:369] acquired machines lock for "old-k8s-version-940806" in 56.927952192s
	I0907 00:51:32.717002   46354 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:51:32.717014   46354 fix.go:54] fixHost starting: 
	I0907 00:51:32.717431   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:32.717466   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:32.735021   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39241
	I0907 00:51:32.735485   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:32.736057   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:51:32.736083   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:32.736457   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:32.736713   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:32.736903   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:51:32.738719   46354 fix.go:102] recreateIfNeeded on old-k8s-version-940806: state=Stopped err=<nil>
	I0907 00:51:32.738743   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	W0907 00:51:32.738924   46354 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:51:32.740721   46354 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-940806" ...
	I0907 00:51:32.742202   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Start
	I0907 00:51:32.742362   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring networks are active...
	I0907 00:51:32.743087   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring network default is active
	I0907 00:51:32.743499   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring network mk-old-k8s-version-940806 is active
	I0907 00:51:32.743863   46354 main.go:141] libmachine: (old-k8s-version-940806) Getting domain xml...
	I0907 00:51:32.744603   46354 main.go:141] libmachine: (old-k8s-version-940806) Creating domain...
	I0907 00:51:30.668969   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.670773   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.670838   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Using SSH client type: external
	I0907 00:51:30.670876   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa (-rw-------)
	I0907 00:51:30.670918   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:30.670934   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | About to run SSH command:
	I0907 00:51:30.670947   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | exit 0
	I0907 00:51:30.770939   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:30.771333   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetConfigRaw
	I0907 00:51:30.772100   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:30.775128   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.775616   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.775654   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.775923   47297 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/config.json ...
	I0907 00:51:30.776161   47297 machine.go:88] provisioning docker machine ...
	I0907 00:51:30.776180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:30.776399   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:30.776597   47297 buildroot.go:166] provisioning hostname "default-k8s-diff-port-773466"
	I0907 00:51:30.776618   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:30.776805   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:30.779367   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.779761   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.779793   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.780022   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:30.780238   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.780399   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.780534   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:30.780687   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:30.781088   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:30.781102   47297 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-773466 && echo "default-k8s-diff-port-773466" | sudo tee /etc/hostname
	I0907 00:51:30.932287   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-773466
	
	I0907 00:51:30.932320   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:30.935703   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.936111   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.936146   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.936324   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:30.936647   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.936851   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.937054   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:30.937266   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:30.937890   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:30.937932   47297 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-773466' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-773466/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-773466' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:31.091619   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:31.091654   47297 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:31.091707   47297 buildroot.go:174] setting up certificates
	I0907 00:51:31.091724   47297 provision.go:83] configureAuth start
	I0907 00:51:31.091746   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:31.092066   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:31.095183   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.095670   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.095710   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.095861   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:31.098597   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.098887   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.098962   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.099205   47297 provision.go:138] copyHostCerts
	I0907 00:51:31.099275   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:31.099291   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:31.099362   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:31.099516   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:31.099531   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:31.099563   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:31.099658   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:31.099671   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:31.099700   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:31.099807   47297 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-773466 san=[192.168.39.96 192.168.39.96 localhost 127.0.0.1 minikube default-k8s-diff-port-773466]
	I0907 00:51:31.793599   47297 provision.go:172] copyRemoteCerts
	I0907 00:51:31.793653   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:31.793676   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:31.796773   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.797153   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.797192   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.797362   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:31.797578   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:31.797751   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:31.797865   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:31.903781   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:31.935908   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0907 00:51:31.967385   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0907 00:51:31.998542   47297 provision.go:86] duration metric: configureAuth took 906.744341ms
	I0907 00:51:31.998576   47297 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:31.998836   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:31.998941   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.002251   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.002712   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.002747   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.002996   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.003300   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.003531   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.003717   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.003996   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:32.004637   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:32.004662   47297 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:32.413687   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:32.413765   47297 machine.go:91] provisioned docker machine in 1.637590059s
	I0907 00:51:32.413777   47297 start.go:300] post-start starting for "default-k8s-diff-port-773466" (driver="kvm2")
	I0907 00:51:32.413787   47297 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:32.413823   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.414183   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:32.414227   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.417432   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.417894   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.417954   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.418202   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.418371   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.418517   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.418625   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.523519   47297 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:32.528959   47297 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:32.528983   47297 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:32.529050   47297 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:32.529144   47297 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:32.529249   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:32.538827   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:32.569792   47297 start.go:303] post-start completed in 156.000078ms
	I0907 00:51:32.569819   47297 fix.go:56] fixHost completed within 20.830399155s
	I0907 00:51:32.569860   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.573180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.573599   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.573653   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.573846   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.574100   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.574292   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.574470   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.574658   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:32.575266   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:32.575282   47297 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:51:32.716793   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047892.656226759
	
	I0907 00:51:32.716819   47297 fix.go:206] guest clock: 1694047892.656226759
	I0907 00:51:32.716829   47297 fix.go:219] Guest: 2023-09-07 00:51:32.656226759 +0000 UTC Remote: 2023-09-07 00:51:32.569839112 +0000 UTC m=+181.933138455 (delta=86.387647ms)
	I0907 00:51:32.716855   47297 fix.go:190] guest clock delta is within tolerance: 86.387647ms
	I0907 00:51:32.716868   47297 start.go:83] releasing machines lock for "default-k8s-diff-port-773466", held for 20.977496549s
	I0907 00:51:32.716900   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.717205   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:32.720353   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.720794   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.720825   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.721001   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721495   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721675   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721767   47297 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:32.721813   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.721925   47297 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:32.721951   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.724909   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725154   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725464   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.725510   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725626   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.725808   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.725825   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.725845   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725869   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.725967   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.726058   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.726164   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.726216   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.726352   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.845353   47297 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:32.851616   47297 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:33.005642   47297 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:33.013527   47297 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:33.013603   47297 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:33.033433   47297 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:33.033467   47297 start.go:466] detecting cgroup driver to use...
	I0907 00:51:33.033538   47297 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:33.055861   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:33.073405   47297 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:33.073477   47297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:33.090484   47297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:33.104735   47297 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:33.245072   47297 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:33.411559   47297 docker.go:212] disabling docker service ...
	I0907 00:51:33.411625   47297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:33.429768   47297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:33.446597   47297 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:33.581915   47297 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:33.704648   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:33.721447   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:33.740243   47297 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:51:33.740330   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.750871   47297 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:33.750937   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.761620   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.774350   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.787718   47297 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:33.802740   47297 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:33.814899   47297 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:33.814975   47297 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:33.832422   47297 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:33.844513   47297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:34.020051   47297 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:34.252339   47297 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:34.252415   47297 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:34.258055   47297 start.go:534] Will wait 60s for crictl version
	I0907 00:51:34.258179   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:51:34.262511   47297 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:34.304552   47297 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:34.304626   47297 ssh_runner.go:195] Run: crio --version
	I0907 00:51:34.376009   47297 ssh_runner.go:195] Run: crio --version
	I0907 00:51:34.448097   47297 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:51:29.972856   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.178016   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.291593   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.385791   46833 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:30.385865   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:30.404991   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:30.926995   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:31.427043   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:31.927049   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.426422   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.927274   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.955713   46833 api_server.go:72] duration metric: took 2.569919035s to wait for apiserver process to appear ...
	I0907 00:51:32.955739   46833 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:32.955757   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:32.956284   46833 api_server.go:269] stopped: https://192.168.50.242:8443/healthz: Get "https://192.168.50.242:8443/healthz": dial tcp 192.168.50.242:8443: connect: connection refused
	I0907 00:51:32.956316   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:32.957189   46833 api_server.go:269] stopped: https://192.168.50.242:8443/healthz: Get "https://192.168.50.242:8443/healthz": dial tcp 192.168.50.242:8443: connect: connection refused
	I0907 00:51:33.457905   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:30.750097   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:51:30.784742   46768 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:51:30.828002   46768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:51:30.852490   46768 system_pods.go:59] 8 kube-system pods found
	I0907 00:51:30.852534   46768 system_pods.go:61] "coredns-5dd5756b68-6ndjc" [8f1f8224-b8b4-4fb6-8f6b-2f4a0fb18e17] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:51:30.852547   46768 system_pods.go:61] "etcd-no-preload-321164" [c4b2427c-d882-4d29-af41-553961e5ee48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:51:30.852559   46768 system_pods.go:61] "kube-apiserver-no-preload-321164" [339ca32b-a5a1-474c-a5db-c35e7f87506d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:51:30.852569   46768 system_pods.go:61] "kube-controller-manager-no-preload-321164" [36241c8a-13ce-4e68-887b-ed929258d688] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:51:30.852581   46768 system_pods.go:61] "kube-proxy-f7dm4" [69308cf3-c18e-4edb-b0ea-c7f34a51aed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:51:30.852595   46768 system_pods.go:61] "kube-scheduler-no-preload-321164" [e9b14f0e-7789-4d1d-9a15-02c88d4a1e3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:51:30.852606   46768 system_pods.go:61] "metrics-server-57f55c9bc5-s95n2" [938af7b2-936b-495c-84c9-d580ae646926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:51:30.852622   46768 system_pods.go:61] "storage-provisioner" [70c690a6-a383-4b3f-9817-954056580009] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:51:30.852633   46768 system_pods.go:74] duration metric: took 24.608458ms to wait for pod list to return data ...
	I0907 00:51:30.852646   46768 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:51:30.860785   46768 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:51:30.860811   46768 node_conditions.go:123] node cpu capacity is 2
	I0907 00:51:30.860821   46768 node_conditions.go:105] duration metric: took 8.167675ms to run NodePressure ...
	I0907 00:51:30.860837   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:31.343033   46768 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:51:31.349908   46768 kubeadm.go:787] kubelet initialised
	I0907 00:51:31.349936   46768 kubeadm.go:788] duration metric: took 6.87538ms waiting for restarted kubelet to initialise ...
	I0907 00:51:31.349944   46768 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:31.366931   46768 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:33.392559   46768 pod_ready.go:102] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:34.449546   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:34.452803   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:34.453196   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:34.453226   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:34.453551   47297 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:34.459166   47297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:34.475045   47297 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:51:34.475159   47297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:34.525380   47297 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:51:34.525495   47297 ssh_runner.go:195] Run: which lz4
	I0907 00:51:34.530921   47297 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0907 00:51:34.537992   47297 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:34.538062   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0907 00:51:34.298412   46354 main.go:141] libmachine: (old-k8s-version-940806) Waiting to get IP...
	I0907 00:51:34.299510   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.300108   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.300166   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.300103   48085 retry.go:31] will retry after 237.599934ms: waiting for machine to come up
	I0907 00:51:34.539798   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.540306   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.540406   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.540348   48085 retry.go:31] will retry after 321.765824ms: waiting for machine to come up
	I0907 00:51:34.864120   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.864735   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.864761   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.864698   48085 retry.go:31] will retry after 485.375139ms: waiting for machine to come up
	I0907 00:51:35.351583   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:35.352142   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:35.352174   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:35.352081   48085 retry.go:31] will retry after 490.428576ms: waiting for machine to come up
	I0907 00:51:35.844432   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:35.844896   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:35.844921   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:35.844821   48085 retry.go:31] will retry after 610.440599ms: waiting for machine to come up
	I0907 00:51:36.456988   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:36.457697   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:36.457720   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:36.457634   48085 retry.go:31] will retry after 704.547341ms: waiting for machine to come up
	I0907 00:51:37.163551   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:37.163973   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:37.164001   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:37.163926   48085 retry.go:31] will retry after 825.931424ms: waiting for machine to come up
	I0907 00:51:37.991936   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:37.992550   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:37.992583   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:37.992489   48085 retry.go:31] will retry after 952.175868ms: waiting for machine to come up
	I0907 00:51:37.065943   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:37.065973   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:37.065987   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.176178   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:37.176213   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:37.457739   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.464386   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:37.464423   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:37.958094   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.966530   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:37.966561   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:38.458170   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:38.465933   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 200:
	ok
	I0907 00:51:38.477109   46833 api_server.go:141] control plane version: v1.28.1
	I0907 00:51:38.477135   46833 api_server.go:131] duration metric: took 5.521389594s to wait for apiserver health ...
	I0907 00:51:38.477143   46833 cni.go:84] Creating CNI manager for ""
	I0907 00:51:38.477149   46833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:38.478964   46833 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:38.480383   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:51:38.509844   46833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:51:38.549403   46833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:51:38.571430   46833 system_pods.go:59] 8 kube-system pods found
	I0907 00:51:38.571472   46833 system_pods.go:61] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:51:38.571491   46833 system_pods.go:61] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:51:38.571503   46833 system_pods.go:61] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:51:38.571563   46833 system_pods.go:61] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:51:38.571575   46833 system_pods.go:61] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:51:38.571592   46833 system_pods.go:61] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:51:38.571602   46833 system_pods.go:61] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:51:38.571613   46833 system_pods.go:61] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:51:38.571626   46833 system_pods.go:74] duration metric: took 22.19998ms to wait for pod list to return data ...
	I0907 00:51:38.571637   46833 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:51:38.581324   46833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:51:38.581361   46833 node_conditions.go:123] node cpu capacity is 2
	I0907 00:51:38.581373   46833 node_conditions.go:105] duration metric: took 9.730463ms to run NodePressure ...
	I0907 00:51:38.581393   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:39.140602   46833 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:51:39.147994   46833 kubeadm.go:787] kubelet initialised
	I0907 00:51:39.148025   46833 kubeadm.go:788] duration metric: took 7.397807ms waiting for restarted kubelet to initialise ...
	I0907 00:51:39.148034   46833 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:39.157241   46833 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.172898   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.172935   46833 pod_ready.go:81] duration metric: took 15.665673ms waiting for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.172947   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.172958   46833 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.180630   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "etcd-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.180666   46833 pod_ready.go:81] duration metric: took 7.698054ms waiting for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.180679   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "etcd-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.180692   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.202626   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.202658   46833 pod_ready.go:81] duration metric: took 21.956163ms waiting for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.202671   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.202699   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.210817   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.210849   46833 pod_ready.go:81] duration metric: took 8.138129ms waiting for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.210860   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.210882   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.801924   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-proxy-47255" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.801951   46833 pod_ready.go:81] duration metric: took 591.060955ms waiting for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.801963   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-proxy-47255" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.801970   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:35.403877   46768 pod_ready.go:102] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:36.394774   46768 pod_ready.go:92] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:36.394823   46768 pod_ready.go:81] duration metric: took 5.027852065s waiting for pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:36.394839   46768 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:38.429614   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:36.550649   47297 crio.go:444] Took 2.019779 seconds to copy over tarball
	I0907 00:51:36.550726   47297 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:51:40.133828   47297 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.583074443s)
	I0907 00:51:40.133861   47297 crio.go:451] Took 3.583177 seconds to extract the tarball
	I0907 00:51:40.133872   47297 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:51:40.177675   47297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:40.230574   47297 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:51:40.230594   47297 cache_images.go:84] Images are preloaded, skipping loading
	I0907 00:51:40.230654   47297 ssh_runner.go:195] Run: crio config
	I0907 00:51:40.296445   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:51:40.296473   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:40.296497   47297 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:40.296519   47297 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.96 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-773466 NodeName:default-k8s-diff-port-773466 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:40.296709   47297 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.96
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-773466"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:40.296793   47297 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-773466 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0907 00:51:40.296850   47297 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:40.307543   47297 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:40.307642   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:40.318841   47297 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0907 00:51:40.337125   47297 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:40.354910   47297 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0907 00:51:40.375283   47297 ssh_runner.go:195] Run: grep 192.168.39.96	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:40.380206   47297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
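	(For reference, the 10-kubeadm.conf drop-in and kubelet.service written above only take effect once systemd re-reads its unit files and the kubelet is restarted; the start sequence below drives that through kubeadm init phase kubelet-start, but the manual equivalent on the node would be:
	
	  sudo systemctl daemon-reload
	  sudo systemctl restart kubelet
	
	The drop-in's empty ExecStart= line is deliberate: it clears the ExecStart inherited from kubelet.service before the second ExecStart= sets the full command line.)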
	I0907 00:51:40.394943   47297 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466 for IP: 192.168.39.96
	I0907 00:51:40.394980   47297 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.395194   47297 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:40.395231   47297 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:40.395295   47297 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.key
	I0907 00:51:40.410649   47297 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.key.e8bbde58
	I0907 00:51:40.410724   47297 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.key
	I0907 00:51:40.410868   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:40.410904   47297 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:40.410916   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:40.410942   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:40.410963   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:40.410985   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:40.411038   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:40.411575   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:40.441079   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 00:51:40.465854   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:40.495221   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:51:40.521493   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:40.548227   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:40.574366   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:40.599116   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:40.624901   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:40.650606   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:40.690154   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.690183   46833 pod_ready.go:81] duration metric: took 888.205223ms waiting for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:40.690194   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.690204   46833 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:40.697723   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.697750   46833 pod_ready.go:81] duration metric: took 7.538932ms waiting for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:40.697761   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.697773   46833 pod_ready.go:38] duration metric: took 1.549726748s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:40.697793   46833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:51:40.709255   46833 ops.go:34] apiserver oom_adj: -16
	I0907 00:51:40.709281   46833 kubeadm.go:640] restartCluster took 21.865468537s
	I0907 00:51:40.709290   46833 kubeadm.go:406] StartCluster complete in 21.917781616s
	I0907 00:51:40.709309   46833 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.709403   46833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:51:40.712326   46833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.808025   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:51:40.808158   46833 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:51:40.808236   46833 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:40.808285   46833 addons.go:69] Setting metrics-server=true in profile "embed-certs-546209"
	I0907 00:51:40.808309   46833 addons.go:231] Setting addon metrics-server=true in "embed-certs-546209"
	W0907 00:51:40.808317   46833 addons.go:240] addon metrics-server should already be in state true
	I0907 00:51:40.808252   46833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-546209"
	I0907 00:51:40.808340   46833 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-546209"
	W0907 00:51:40.808354   46833 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:51:40.808375   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:40.808390   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:40.808257   46833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-546209"
	I0907 00:51:40.808493   46833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-546209"
	I0907 00:51:40.809864   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.809936   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.810411   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.810477   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.810518   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.810526   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.827159   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36263
	I0907 00:51:40.827608   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0907 00:51:40.827784   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.828059   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.828326   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.828354   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.828556   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.828579   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.828955   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.829067   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.829670   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.829715   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.829932   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.831070   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0907 00:51:40.831543   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.832142   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.832161   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.832527   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.834743   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.834801   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.853510   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0907 00:51:40.854194   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45027
	I0907 00:51:40.854261   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.854987   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.855019   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.855102   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.855381   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.855745   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.855791   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.855808   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.856430   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.856882   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.858468   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:41.154848   46833 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:51:40.859116   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:41.300012   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:51:41.362259   46833 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:51:41.362296   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:51:41.362332   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:41.460930   46833 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:51:41.460961   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:51:41.460988   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:41.464836   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465151   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465419   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:41.465455   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465590   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:41.465621   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465764   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:41.465908   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:41.465979   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:41.466055   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:41.466150   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:41.466196   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:41.466276   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:41.466309   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:41.587470   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:51:41.594683   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:51:41.594709   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:51:41.621438   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:51:41.621471   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:51:41.664886   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:51:41.664910   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:51:41.691795   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:51:41.886942   46833 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.078877765s)
	I0907 00:51:41.887038   46833 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0907 00:51:41.898851   46833 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-546209" context rescaled to 1 replicas
	I0907 00:51:41.898900   46833 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:51:42.014441   46833 out.go:177] * Verifying Kubernetes components...
	I0907 00:51:38.946740   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:38.947268   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:38.947292   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:38.947211   48085 retry.go:31] will retry after 1.334104337s: waiting for machine to come up
	I0907 00:51:40.282730   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:40.283209   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:40.283233   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:40.283168   48085 retry.go:31] will retry after 1.521256667s: waiting for machine to come up
	I0907 00:51:41.806681   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:41.807182   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:41.807211   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:41.807126   48085 retry.go:31] will retry after 1.907600342s: waiting for machine to come up
	I0907 00:51:42.132070   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:51:42.150876   46833 addons.go:231] Setting addon default-storageclass=true in "embed-certs-546209"
	W0907 00:51:42.150905   46833 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:51:42.150935   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:42.151329   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:42.151357   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:42.172605   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0907 00:51:42.173122   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:42.173662   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:42.173709   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:42.174155   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:42.174813   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:42.174877   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:42.196701   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0907 00:51:42.197287   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:42.197859   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:42.197882   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:42.198246   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:42.198418   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:42.200558   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:42.200942   46833 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:51:42.200954   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:51:42.200967   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:42.204259   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:42.204952   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:42.204975   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:42.205009   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:42.205139   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:42.205280   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:42.205405   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:42.377838   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:51:43.286666   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.699154782s)
	I0907 00:51:43.286720   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.286734   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.287148   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.287174   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.287190   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.287210   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.287220   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.288970   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.289008   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.289021   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.436691   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.744844788s)
	I0907 00:51:43.436717   46833 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.304610389s)
	I0907 00:51:43.436744   46833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-546209" to be "Ready" ...
	I0907 00:51:43.436758   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.436775   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.436862   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.05899604s)
	I0907 00:51:43.436883   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.436893   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.438856   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.438887   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.438903   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.438907   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.438914   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.438919   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.438924   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.438932   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.438934   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.439020   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.439206   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439219   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.439231   46833 addons.go:467] Verifying addon metrics-server=true in "embed-certs-546209"
	I0907 00:51:43.439266   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439277   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.439290   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.439299   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.439502   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439513   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.442917   46833 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0907 00:51:43.444226   46833 addons.go:502] enable addons completed in 2.636061813s: enabled=[storage-provisioner metrics-server default-storageclass]
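	(For reference, once the metrics-server addon has been applied as above, its registration can be checked from the same kubeconfig context; the deployment and apiservice names here are the standard ones created by metrics-server rather than values taken from this log:
	
	  kubectl --context embed-certs-546209 -n kube-system get deployment metrics-server
	  kubectl --context embed-certs-546209 get apiservice v1beta1.metrics.k8s.io
	)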
	I0907 00:51:40.924494   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:42.925582   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:40.679951   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:40.859542   47297 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:40.881658   47297 ssh_runner.go:195] Run: openssl version
	I0907 00:51:40.888518   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:40.902200   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.908038   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.908106   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.914418   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:40.927511   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:40.941360   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.947556   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.947622   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.953780   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:40.966576   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:40.981447   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:40.989719   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:40.989779   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:41.000685   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
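	(The openssl x509 -hash runs and ln -fs commands above implement OpenSSL's CApath layout: each trusted certificate must be reachable through a symlink named <subject-hash>.0, where the hash is exactly what -hash prints. From the commands above, minikubeCA.pem maps to b5213941.0, 136572.pem to 3ec20f2e.0, and 13657.pem to 51391683.0. Reproduced by hand:
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # symlink to /etc/ssl/certs/minikubeCA.pem
	)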
	I0907 00:51:41.017936   47297 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:41.023280   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:41.029915   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:41.038011   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:41.044570   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:41.052534   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:41.060580   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
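	(The -checkend 86400 probes above ask openssl whether each certificate expires within the next 86400 seconds, i.e. 24 hours: the command exits 0 when the cert stays valid past that window and non-zero otherwise, which appears to be how the restart path decides that the existing certs can be reused. A standalone equivalent:
	
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "valid for >24h" || echo "expires within 24h"
	)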
	I0907 00:51:41.068664   47297 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:41.068776   47297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:41.068897   47297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:41.111849   47297 cri.go:89] found id: ""
	I0907 00:51:41.111923   47297 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:41.126171   47297 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:41.126193   47297 kubeadm.go:636] restartCluster start
	I0907 00:51:41.126249   47297 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:41.138401   47297 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.139882   47297 kubeconfig.go:92] found "default-k8s-diff-port-773466" server: "https://192.168.39.96:8444"
	I0907 00:51:41.142907   47297 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:41.154285   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.154346   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.168992   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.169012   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.169057   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.183283   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.683942   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.684036   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.701647   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:42.183800   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:42.183882   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:42.213176   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:42.683460   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:42.683550   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:42.701805   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.184099   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:43.184206   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:43.202359   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.683466   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:43.683541   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:43.697133   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:44.183663   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:44.183750   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:44.201236   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:44.684320   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:44.684411   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:44.698198   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:45.183451   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:45.183533   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:45.197529   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.716005   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:43.716632   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:43.716668   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:43.716570   48085 retry.go:31] will retry after 3.526983217s: waiting for machine to come up
	I0907 00:51:47.245213   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:47.245615   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:47.245645   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:47.245561   48085 retry.go:31] will retry after 3.453934877s: waiting for machine to come up
	I0907 00:51:45.450760   46833 node_ready.go:58] node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:47.949024   46833 node_ready.go:49] node "embed-certs-546209" has status "Ready":"True"
	I0907 00:51:47.949053   46833 node_ready.go:38] duration metric: took 4.512298071s waiting for node "embed-certs-546209" to be "Ready" ...
	I0907 00:51:47.949063   46833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:47.956755   46833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.964323   46833 pod_ready.go:92] pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:47.964345   46833 pod_ready.go:81] duration metric: took 7.56298ms waiting for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.964356   46833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.425347   46768 pod_ready.go:92] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.425370   46768 pod_ready.go:81] duration metric: took 9.030524984s waiting for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.425380   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.432508   46768 pod_ready.go:92] pod "kube-apiserver-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.432531   46768 pod_ready.go:81] duration metric: took 7.145112ms waiting for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.432545   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.441245   46768 pod_ready.go:92] pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.441265   46768 pod_ready.go:81] duration metric: took 8.713177ms waiting for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.441275   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f7dm4" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.446603   46768 pod_ready.go:92] pod "kube-proxy-f7dm4" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.446627   46768 pod_ready.go:81] duration metric: took 5.346628ms waiting for pod "kube-proxy-f7dm4" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.446641   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.453061   46768 pod_ready.go:92] pod "kube-scheduler-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.453091   46768 pod_ready.go:81] duration metric: took 6.442457ms waiting for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.453104   46768 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.730093   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:45.684191   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:45.684287   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:45.702020   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:46.183587   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:46.183697   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:46.201390   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:46.683442   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:46.683519   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:46.699015   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:47.183908   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:47.183998   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:47.196617   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:47.683929   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:47.683991   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:47.696499   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:48.183929   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:48.184000   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:48.197425   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:48.683932   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:48.684019   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:48.696986   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:49.184149   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:49.184224   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:49.197363   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:49.684066   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:49.684152   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:49.697853   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:50.183372   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:50.183490   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:50.195818   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:50.700500   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:50.700920   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:50.700939   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:50.700882   48085 retry.go:31] will retry after 4.6319983s: waiting for machine to come up
	I0907 00:51:49.984505   46833 pod_ready.go:102] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:51.987061   46833 pod_ready.go:102] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:53.485331   46833 pod_ready.go:92] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.485356   46833 pod_ready.go:81] duration metric: took 5.520993929s waiting for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.485368   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.491351   46833 pod_ready.go:92] pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.491371   46833 pod_ready.go:81] duration metric: took 5.996687ms waiting for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.491387   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.496425   46833 pod_ready.go:92] pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.496448   46833 pod_ready.go:81] duration metric: took 5.054087ms waiting for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.496460   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.504963   46833 pod_ready.go:92] pod "kube-proxy-47255" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.504982   46833 pod_ready.go:81] duration metric: took 8.515814ms waiting for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.504990   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.550180   46833 pod_ready.go:92] pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.550208   46833 pod_ready.go:81] duration metric: took 45.211992ms waiting for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.550222   46833 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:50.229069   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:52.233340   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:54.728824   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:50.683740   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:50.683806   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:50.695528   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:51.154940   47297 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:51.154990   47297 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:51.155002   47297 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:51.155052   47297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:51.190293   47297 cri.go:89] found id: ""
	I0907 00:51:51.190351   47297 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:51.207237   47297 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:51.216623   47297 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:51.216671   47297 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:51.226376   47297 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:51.226399   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:51.352763   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:51.879625   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.090367   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.169714   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
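	(The five invocations above re-run only the certs, kubeconfig, kubelet-start, control-plane and etcd phases of kubeadm init against the generated config, which is how the restart path rewrites the kubeconfigs and static-pod manifests without a full kubeadm init. The phases available for this binary can be listed with:
	
	  /var/lib/minikube/binaries/v1.28.1/kubeadm init phase --help
	)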
	I0907 00:51:52.258757   47297 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:52.258861   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:52.274881   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:52.799083   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:53.298600   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:53.798807   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.299419   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.798660   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.824175   47297 api_server.go:72] duration metric: took 2.565415526s to wait for apiserver process to appear ...
	I0907 00:51:54.824203   47297 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:54.824222   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
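	(The healthz probe started here is a plain HTTPS GET against the apiserver's secure port; it can be reproduced from the host with curl, with -k used only because the check does not need to chain to the cluster CA:
	
	  curl -k https://192.168.39.96:8444/healthz    # returns "ok" once the apiserver is serving; /livez and /readyz are the newer equivalents
	)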
	I0907 00:51:55.335922   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.336311   46354 main.go:141] libmachine: (old-k8s-version-940806) Found IP for machine: 192.168.83.245
	I0907 00:51:55.336325   46354 main.go:141] libmachine: (old-k8s-version-940806) Reserving static IP address...
	I0907 00:51:55.336336   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has current primary IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.336816   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "old-k8s-version-940806", mac: "52:54:00:1f:83:50", ip: "192.168.83.245"} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.336872   46354 main.go:141] libmachine: (old-k8s-version-940806) Reserved static IP address: 192.168.83.245
	I0907 00:51:55.336893   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | skip adding static IP to network mk-old-k8s-version-940806 - found existing host DHCP lease matching {name: "old-k8s-version-940806", mac: "52:54:00:1f:83:50", ip: "192.168.83.245"}
	I0907 00:51:55.336909   46354 main.go:141] libmachine: (old-k8s-version-940806) Waiting for SSH to be available...
	I0907 00:51:55.336919   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Getting to WaitForSSH function...
	I0907 00:51:55.339323   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.339730   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.339768   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.339880   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Using SSH client type: external
	I0907 00:51:55.339907   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa (-rw-------)
	I0907 00:51:55.339946   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:55.339964   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | About to run SSH command:
	I0907 00:51:55.340001   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | exit 0
	I0907 00:51:55.483023   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:55.483362   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetConfigRaw
	I0907 00:51:55.484121   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:55.487091   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.487590   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.487621   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.487863   46354 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/config.json ...
	I0907 00:51:55.488067   46354 machine.go:88] provisioning docker machine ...
	I0907 00:51:55.488088   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:55.488332   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.488525   46354 buildroot.go:166] provisioning hostname "old-k8s-version-940806"
	I0907 00:51:55.488551   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.488707   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.491136   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.491567   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.491600   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.491818   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.491950   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.492058   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.492133   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.492237   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:55.492685   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:55.492705   46354 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-940806 && echo "old-k8s-version-940806" | sudo tee /etc/hostname
	I0907 00:51:55.648589   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-940806
	
	I0907 00:51:55.648628   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.651624   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.652046   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.652094   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.652282   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.652472   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.652654   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.652813   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.652977   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:55.653628   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:55.653657   46354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-940806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-940806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-940806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:55.805542   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:55.805573   46354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:55.805607   46354 buildroot.go:174] setting up certificates
	I0907 00:51:55.805617   46354 provision.go:83] configureAuth start
	I0907 00:51:55.805629   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.805907   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:55.808800   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.809142   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.809175   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.809299   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.811385   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.811785   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.811812   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.811980   46354 provision.go:138] copyHostCerts
	I0907 00:51:55.812089   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:55.812104   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:55.812172   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:55.812287   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:55.812297   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:55.812321   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:55.812418   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:55.812427   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:55.812463   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:55.812538   46354 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-940806 san=[192.168.83.245 192.168.83.245 localhost 127.0.0.1 minikube old-k8s-version-940806]
	I0907 00:51:55.920274   46354 provision.go:172] copyRemoteCerts
	I0907 00:51:55.920327   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:55.920348   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.923183   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.923599   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.923632   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.923816   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.924011   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.924174   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.924335   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.020317   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:56.048299   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0907 00:51:56.075483   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:51:56.101118   46354 provision.go:86] duration metric: configureAuth took 295.488336ms
	I0907 00:51:56.101150   46354 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:56.101338   46354 config.go:182] Loaded profile config "old-k8s-version-940806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:51:56.101407   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.104235   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.104600   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.104640   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.104878   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.105093   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.105306   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.105495   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.105668   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:56.106199   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:56.106217   46354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:56.435571   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:56.435644   46354 machine.go:91] provisioned docker machine in 947.562946ms
	I0907 00:51:56.435662   46354 start.go:300] post-start starting for "old-k8s-version-940806" (driver="kvm2")
	I0907 00:51:56.435679   46354 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:56.435712   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.436041   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:56.436083   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.439187   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.439537   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.439563   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.439888   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.440116   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.440285   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.440427   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.542162   46354 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:56.546357   46354 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:56.546375   46354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:56.546435   46354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:56.546511   46354 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:56.546648   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:56.556125   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:56.577844   46354 start.go:303] post-start completed in 142.166343ms
	I0907 00:51:56.577874   46354 fix.go:56] fixHost completed within 23.860860531s
	I0907 00:51:56.577898   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.580726   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.581062   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.581090   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.581221   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.581540   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.581742   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.581909   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.582113   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:56.582532   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:56.582553   46354 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:51:56.715584   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047916.695896692
	
	I0907 00:51:56.715607   46354 fix.go:206] guest clock: 1694047916.695896692
	I0907 00:51:56.715615   46354 fix.go:219] Guest: 2023-09-07 00:51:56.695896692 +0000 UTC Remote: 2023-09-07 00:51:56.57787864 +0000 UTC m=+363.381197654 (delta=118.018052ms)
	I0907 00:51:56.715632   46354 fix.go:190] guest clock delta is within tolerance: 118.018052ms
	I0907 00:51:56.715639   46354 start.go:83] releasing machines lock for "old-k8s-version-940806", held for 23.998669865s
	I0907 00:51:56.715658   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.715909   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:56.718637   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.718992   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.719030   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.719203   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719646   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719852   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719935   46354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:56.719980   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.720050   46354 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:56.720068   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.722463   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.722752   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.722809   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.722850   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.723041   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.723208   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.723241   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.723282   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.723394   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.723406   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.723599   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.723632   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.723797   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.723956   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.835700   46354 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:56.841554   46354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:56.988658   46354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:56.995421   46354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:56.995495   46354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:57.011588   46354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:57.011608   46354 start.go:466] detecting cgroup driver to use...
	I0907 00:51:57.011669   46354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:57.029889   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:57.043942   46354 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:57.044002   46354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:57.056653   46354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:57.069205   46354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:57.184510   46354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:57.323399   46354 docker.go:212] disabling docker service ...
	I0907 00:51:57.323477   46354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:57.336506   46354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:57.348657   46354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:57.464450   46354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:57.577763   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:57.590934   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:57.609445   46354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0907 00:51:57.609500   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.619112   46354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:57.619173   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.629272   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.638702   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.648720   46354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:57.659046   46354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:57.667895   46354 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:57.667971   46354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:57.681673   46354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:57.690907   46354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:57.801113   46354 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:57.978349   46354 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:57.978432   46354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:57.983665   46354 start.go:534] Will wait 60s for crictl version
	I0907 00:51:57.983714   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:51:57.988244   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:58.019548   46354 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:58.019616   46354 ssh_runner.go:195] Run: crio --version
	I0907 00:51:58.068229   46354 ssh_runner.go:195] Run: crio --version
	I0907 00:51:58.118554   46354 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0907 00:51:58.120322   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:58.122944   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:58.123321   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:58.123377   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:58.123569   46354 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:58.128115   46354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:58.140862   46354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0907 00:51:58.140933   46354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:58.182745   46354 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0907 00:51:58.182829   46354 ssh_runner.go:195] Run: which lz4
	I0907 00:51:58.188491   46354 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0907 00:51:58.193202   46354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:58.193237   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0907 00:51:55.862451   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.363582   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.511655   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:58.511686   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:58.511699   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:58.549405   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:58.549442   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:59.050120   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:59.057915   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:59.057946   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:59.550150   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:59.559928   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:59.559970   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:52:00.050535   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:52:00.060556   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 200:
	ok
	I0907 00:52:00.069872   47297 api_server.go:141] control plane version: v1.28.1
	I0907 00:52:00.069898   47297 api_server.go:131] duration metric: took 5.245689478s to wait for apiserver health ...
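The healthz probe in the surrounding api_server.go lines keeps requesting https://192.168.39.96:8444/healthz, treating the early 403 (anonymous user) and 500 (rbac/bootstrap-roles post-start hook still failing) responses as "not ready yet" until the endpoint answers a plain 200/ok at 00:52:00. A minimal Go sketch of that polling pattern, with TLS verification skipped purely for illustration (an assumption; the real check is built from the cluster's own certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"log"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustrative shortcut: skip certificate verification instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported "ok"
				}
				// 403 and 500 both mean the control plane is still coming up: keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s did not report healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.96:8444/healthz", 2*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("apiserver is healthy")
	}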
	I0907 00:52:00.069906   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:52:00.069911   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:00.071700   47297 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:56.730172   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.731973   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:00.073858   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:52:00.098341   47297 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:52:00.120355   47297 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:52:00.137820   47297 system_pods.go:59] 8 kube-system pods found
	I0907 00:52:00.137936   47297 system_pods.go:61] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:52:00.137967   47297 system_pods.go:61] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:52:00.137989   47297 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:52:00.138007   47297 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:52:00.138018   47297 system_pods.go:61] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:52:00.138032   47297 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:52:00.138045   47297 system_pods.go:61] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:52:00.138058   47297 system_pods.go:61] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:52:00.138069   47297 system_pods.go:74] duration metric: took 17.695163ms to wait for pod list to return data ...
	I0907 00:52:00.138082   47297 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:52:00.145755   47297 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:52:00.145790   47297 node_conditions.go:123] node cpu capacity is 2
	I0907 00:52:00.145803   47297 node_conditions.go:105] duration metric: took 7.711411ms to run NodePressure ...
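The system_pods and node_conditions lines above first enumerate the kube-system pods and then read the node's capacity (17784752Ki of ephemeral storage, 2 CPUs) before declaring the NodePressure check done. A hedged client-go sketch of reading those same capacity fields as a standalone program, not minikube's node_conditions.go, using the kubeconfig path this run writes (see the settings.go lines below):

	package main

	import (
		"context"
		"fmt"
		"log"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig written by this test run.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17174-6470/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[v1.ResourceCPU]
			storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}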
	I0907 00:52:00.145825   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:00.468823   47297 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:52:00.476107   47297 kubeadm.go:787] kubelet initialised
	I0907 00:52:00.476130   47297 kubeadm.go:788] duration metric: took 7.282541ms waiting for restarted kubelet to initialise ...
	I0907 00:52:00.476138   47297 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:00.483366   47297 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.495045   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.495072   47297 pod_ready.go:81] duration metric: took 11.633116ms waiting for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.495083   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.495092   47297 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.500465   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.500488   47297 pod_ready.go:81] duration metric: took 5.386997ms waiting for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.500498   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.500504   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.507318   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.507392   47297 pod_ready.go:81] duration metric: took 6.878563ms waiting for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.507416   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.507436   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.527784   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.527820   47297 pod_ready.go:81] duration metric: took 20.36412ms waiting for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.527833   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.527844   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.936895   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-proxy-5bh7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.936926   47297 pod_ready.go:81] duration metric: took 409.073374ms waiting for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.936938   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-proxy-5bh7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.936947   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:01.325746   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.325777   47297 pod_ready.go:81] duration metric: took 388.819699ms waiting for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:01.325787   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.325798   47297 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:01.725791   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.725828   47297 pod_ready.go:81] duration metric: took 400.019773ms waiting for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:01.725840   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.725852   47297 pod_ready.go:38] duration metric: took 1.249702286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
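pod_ready.go above loops over each system-critical pod, skipping it with a warning while the node itself still reports Ready=False, and gives every pod up to 4m0s to reach the Ready condition. A minimal client-go sketch of that per-pod wait, with the same kubeconfig as in the previous sketch and the coredns pod name taken from the log (the polling interval and error handling are illustrative assumptions):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's Ready condition is True.
	func podIsReady(pod *v1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == v1.PodReady {
				return c.Status == v1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17174-6470/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		// Re-read the pod until its Ready condition is True or the 4m0s budget is spent.
		err = wait.PollImmediate(400*time.Millisecond, 4*time.Minute, func() (bool, error) {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-wdnpc", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			return podIsReady(pod), nil
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("pod is Ready")
	}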
	I0907 00:52:01.725871   47297 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:52:01.742792   47297 ops.go:34] apiserver oom_adj: -16
	I0907 00:52:01.742816   47297 kubeadm.go:640] restartCluster took 20.616616394s
	I0907 00:52:01.742825   47297 kubeadm.go:406] StartCluster complete in 20.674170679s
	I0907 00:52:01.742843   47297 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:01.742936   47297 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:52:01.744735   47297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:01.744998   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:52:01.745113   47297 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:52:01.745212   47297 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745218   47297 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745232   47297 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.745240   47297 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:52:01.745232   47297 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-773466"
	I0907 00:52:01.745268   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:52:01.745301   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.745248   47297 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745432   47297 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.745442   47297 addons.go:240] addon metrics-server should already be in state true
	I0907 00:52:01.745489   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.745709   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745718   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745753   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.745813   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.745895   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745930   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.755156   47297 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-773466" context rescaled to 1 replicas
	I0907 00:52:01.755193   47297 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:52:01.757452   47297 out.go:177] * Verifying Kubernetes components...
	I0907 00:52:01.759076   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:52:01.763067   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0907 00:52:01.763578   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.764125   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.764147   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.764483   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.764668   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.764804   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0907 00:52:01.765385   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.765972   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.765988   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.766336   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.768468   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I0907 00:52:01.768952   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.768985   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.769339   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.769827   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.769860   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.770129   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.770612   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.770641   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.782323   47297 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.782353   47297 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:52:01.782387   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.782822   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.782858   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.788535   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0907 00:52:01.789169   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.789826   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.789845   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.790158   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0907 00:52:01.790340   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.790544   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.790616   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.791036   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.791055   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.791552   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.791726   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.793270   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.796517   47297 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:52:01.794011   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.798239   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:52:01.798266   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:52:01.798291   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.800176   47297 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:51:59.928894   46354 crio.go:444] Took 1.740438 seconds to copy over tarball
	I0907 00:51:59.928974   46354 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:52:03.105945   46354 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.176929999s)
	I0907 00:52:03.105977   46354 crio.go:451] Took 3.177055 seconds to extract the tarball
	I0907 00:52:03.105987   46354 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:52:03.150092   46354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:52:03.193423   46354 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0907 00:52:03.193450   46354 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0907 00:52:03.193525   46354 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.193544   46354 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:03.193564   46354 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.193730   46354 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.193799   46354 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.193802   46354 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0907 00:52:03.193829   46354 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.193736   46354 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.194948   46354 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.195017   46354 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.194949   46354 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:03.195642   46354 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.195763   46354 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.195814   46354 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.195843   46354 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0907 00:52:03.195874   46354 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:01.801952   47297 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:52:01.801969   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:52:01.801989   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.800897   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I0907 00:52:01.801662   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.802261   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.802286   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.802332   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.802683   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.802922   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.802961   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.803124   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.804246   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.804272   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.804654   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.804870   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.805283   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.805314   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.805418   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.805448   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.805541   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.805723   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.805889   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.806052   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.822423   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32999
	I0907 00:52:01.822847   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.823441   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.823459   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.823843   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.824036   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.825740   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.826032   47297 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:52:01.826051   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:52:01.826076   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.829041   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.829284   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.829310   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.829407   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.829591   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.829712   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.830194   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.956646   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:52:01.956669   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:52:01.974183   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:52:01.978309   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:52:02.048672   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:52:02.048708   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:52:02.088069   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:52:02.088099   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:52:02.142271   47297 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-773466" to be "Ready" ...
	I0907 00:52:02.142668   47297 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0907 00:52:02.197788   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:52:03.587076   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.612851341s)
	I0907 00:52:03.587130   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587146   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587147   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.608805294s)
	I0907 00:52:03.587182   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587210   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587452   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.587493   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587514   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587525   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587535   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587495   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.587751   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587765   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587892   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587905   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587925   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587935   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.588252   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.588277   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.588285   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.588297   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.588305   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.588543   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.588555   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.648373   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.450538249s)
	I0907 00:52:03.648433   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.648449   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.648789   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.648824   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.648833   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.648848   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.648858   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.649118   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.649137   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.649153   47297 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-773466"
	I0907 00:52:03.834785   47297 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:52:00.858996   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:02.861983   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:01.228807   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:03.229017   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:04.154749   47297 node_ready.go:58] node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:04.260530   47297 addons.go:502] enable addons completed in 2.51536834s: enabled=[storage-provisioner default-storageclass metrics-server]
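The addon enablement traced above follows one pattern: each manifest is scp'd into /etc/kubernetes/addons inside the VM and then applied with the cluster's bundled kubectl against /var/lib/minikube/kubeconfig. Below is a minimal, illustrative Go sketch of that apply step only; the paths come from the log lines, the helper name is made up, and minikube actually runs this command over SSH inside the VM rather than locally.

// Illustrative sketch of the "kubectl apply" addon step seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests runs the cluster's own kubectl (sudo accepts the
// leading KUBECONFIG=... assignment, exactly as in the logged command).
func applyAddonManifests(manifests []string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.1/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests([]string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	})
	if err != nil {
		fmt.Println(err)
	}
}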
	I0907 00:52:03.398538   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.480702   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.482201   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.482206   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0907 00:52:03.482815   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.484155   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.484815   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.698892   46354 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0907 00:52:03.698936   46354 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.698938   46354 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0907 00:52:03.698965   46354 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0907 00:52:03.699028   46354 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.698975   46354 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0907 00:52:03.698982   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.699069   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.699084   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.703734   46354 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0907 00:52:03.703764   46354 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.703796   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729259   46354 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0907 00:52:03.729295   46354 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.729331   46354 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0907 00:52:03.729366   46354 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.729373   46354 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0907 00:52:03.729394   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.729398   46354 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.729404   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729336   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729441   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729491   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.729519   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0907 00:52:03.729601   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.791169   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0907 00:52:03.814632   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0907 00:52:03.814660   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.814689   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.814747   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0907 00:52:03.814799   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.814839   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0907 00:52:03.814841   46354 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0907 00:52:03.876039   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0907 00:52:03.876095   46354 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0907 00:52:03.876082   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0907 00:52:03.876114   46354 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0907 00:52:03.876153   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0907 00:52:03.876158   46354 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0907 00:52:04.549426   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:05.733437   46354 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.85724297s)
	I0907 00:52:05.733479   46354 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0907 00:52:05.733519   46354 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.184052604s)
	I0907 00:52:05.733568   46354 cache_images.go:92] LoadImages completed in 2.540103614s
	W0907 00:52:05.733639   46354 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
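The image-cache step above first inspects which images already exist in the CRI-O store (podman image inspect), removes stale tags with crictl rmi, then loads pre-saved tarballs from /var/lib/minikube/images with podman load; the warning only means the kube-scheduler_v1.16.0 tarball was missing on the host, so startup continues without it. A rough Go sketch of the load step, with the tarball path taken from the log (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage loads a pre-saved image tarball into the CRI-O image
// store with podman, mirroring the logged "podman load -i" command.
func loadCachedImage(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/pause_3.1"); err != nil {
		fmt.Println(err)
	}
}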
	I0907 00:52:05.733723   46354 ssh_runner.go:195] Run: crio config
	I0907 00:52:05.795752   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:52:05.795780   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:05.795801   46354 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:52:05.795824   46354 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.245 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-940806 NodeName:old-k8s-version-940806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0907 00:52:05.795975   46354 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-940806"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-940806
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.245:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:52:05.796074   46354 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-940806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-940806 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:52:05.796135   46354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0907 00:52:05.807772   46354 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:52:05.807864   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:52:05.818185   46354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0907 00:52:05.835526   46354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:52:05.853219   46354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0907 00:52:05.873248   46354 ssh_runner.go:195] Run: grep 192.168.83.245	control-plane.minikube.internal$ /etc/hosts
	I0907 00:52:05.877640   46354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:52:05.890975   46354 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806 for IP: 192.168.83.245
	I0907 00:52:05.891009   46354 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:05.891171   46354 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:52:05.891226   46354 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:52:05.891327   46354 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.key
	I0907 00:52:05.891407   46354 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.key.8de8e89b
	I0907 00:52:05.891459   46354 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.key
	I0907 00:52:05.891667   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:52:05.891713   46354 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:52:05.891729   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:52:05.891766   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:52:05.891801   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:52:05.891836   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:52:05.891913   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:52:05.892547   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:52:05.917196   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:52:05.942387   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:52:05.965551   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:52:05.987658   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:52:06.012449   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:52:06.037055   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:52:06.061051   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:52:06.085002   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:52:06.109132   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:52:06.132091   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:52:06.155215   46354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:52:06.173122   46354 ssh_runner.go:195] Run: openssl version
	I0907 00:52:06.178736   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:52:06.189991   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.194548   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.194596   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.200538   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:52:06.212151   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:52:06.224356   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.229976   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.230037   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.236389   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:52:06.248369   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:52:06.259325   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.264451   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.264514   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.270564   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:52:06.282506   46354 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:52:06.287280   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:52:06.293280   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:52:06.299272   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:52:06.305342   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:52:06.311194   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:52:06.317634   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
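The six openssl invocations above are certificate freshness checks: `openssl x509 -checkend 86400` exits non-zero when the certificate will expire within the next 86400 seconds (24 hours). A small Go sketch of the same check follows; the cert path is taken from the log and the helper name is made up.

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinADay reports whether the certificate expires within 24h,
// using openssl's -checkend exit status (1 = expiring, 0 = still valid).
func expiresWithinADay(certPath string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // non-zero exit: certificate expires within 86400s
		}
		return false, err // openssl missing, unreadable file, etc.
	}
	return false, nil
}

func main() {
	expiring, err := expiresWithinADay("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	fmt.Println(expiring, err)
}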
	I0907 00:52:06.323437   46354 kubeadm.go:404] StartCluster: {Name:old-k8s-version-940806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-940806 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:52:06.323591   46354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:52:06.323668   46354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:52:06.358285   46354 cri.go:89] found id: ""
	I0907 00:52:06.358357   46354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:52:06.368975   46354 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:52:06.368997   46354 kubeadm.go:636] restartCluster start
	I0907 00:52:06.369060   46354 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:52:06.379841   46354 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.380906   46354 kubeconfig.go:92] found "old-k8s-version-940806" server: "https://192.168.83.245:8443"
	I0907 00:52:06.383428   46354 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:52:06.393862   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.393912   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.406922   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.406947   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.406995   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.419930   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.920685   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.920763   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.934327   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:07.420551   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:07.420652   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:07.438377   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:07.920500   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:07.920598   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:07.936835   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
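The repeated "Checking apiserver status" lines above are a poll loop: roughly every 500ms minikube looks for a kube-apiserver process inside the VM with pgrep, and each failure simply means the control plane has not come up yet. A rough sketch of that polling pattern (not minikube's actual code; the pgrep pattern is the one from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process until it appears or
// the timeout elapses, mirroring the ~500ms cadence visible in the log.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}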
	I0907 00:52:05.363807   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:07.869141   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:05.229666   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:07.729895   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:09.731464   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:06.656552   47297 node_ready.go:58] node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:09.155326   47297 node_ready.go:49] node "default-k8s-diff-port-773466" has status "Ready":"True"
	I0907 00:52:09.155347   47297 node_ready.go:38] duration metric: took 7.013040488s waiting for node "default-k8s-diff-port-773466" to be "Ready" ...
	I0907 00:52:09.155355   47297 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:09.164225   47297 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.170406   47297 pod_ready.go:92] pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.170437   47297 pod_ready.go:81] duration metric: took 6.189088ms waiting for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.170450   47297 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.178363   47297 pod_ready.go:92] pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.178390   47297 pod_ready.go:81] duration metric: took 7.932283ms waiting for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.178403   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.184875   47297 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.184891   47297 pod_ready.go:81] duration metric: took 6.482032ms waiting for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.184900   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.192246   47297 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.192265   47297 pod_ready.go:81] duration metric: took 7.359919ms waiting for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.192274   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.556032   47297 pod_ready.go:92] pod "kube-proxy-5bh7n" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.556064   47297 pod_ready.go:81] duration metric: took 363.783194ms waiting for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.556077   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
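The pod_ready.go checks above amount to reading each control-plane pod's Ready condition until it reports True. A minimal client-go sketch of that per-pod check follows; the helper and kubeconfig path are assumptions for illustration, not minikube's own wait code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady fetches the pod and reports whether its Ready condition is True.
func podIsReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path is an assumption; any admin kubeconfig for the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podIsReady(client, "kube-system", "etcd-default-k8s-diff-port-773466")
	fmt.Println(ready, err)
}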
	I0907 00:52:08.420749   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:08.420813   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:08.434111   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:08.920795   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:08.920891   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:08.934515   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:09.420076   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:09.420167   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:09.433668   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:09.920090   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:09.920185   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:09.934602   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.420086   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:10.420186   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:10.434617   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.920124   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:10.920196   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:10.933372   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:11.420990   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:11.421072   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:11.435087   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:11.920579   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:11.920653   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:11.933614   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:12.420100   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:12.420192   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:12.434919   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:12.920816   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:12.920911   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:12.934364   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.357508   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.357966   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:14.358965   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.227826   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:14.228106   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:11.862581   47297 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.363573   47297 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:12.363593   47297 pod_ready.go:81] duration metric: took 2.807509276s waiting for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:12.363602   47297 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:14.763624   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:13.420355   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:13.420427   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:13.434047   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:13.920675   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:13.920757   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:13.933725   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:14.420169   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:14.420244   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:14.433012   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:14.920490   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:14.920603   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:14.934208   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:15.420724   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:15.420807   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:15.433542   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:15.920040   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:15.920114   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:15.933104   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:16.394845   46354 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:52:16.394878   46354 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:52:16.394891   46354 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:52:16.394939   46354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:52:16.430965   46354 cri.go:89] found id: ""
	I0907 00:52:16.431029   46354 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:52:16.449241   46354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:52:16.459891   46354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:52:16.459973   46354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:52:16.470006   46354 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:52:16.470033   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:16.591111   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.262647   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.481491   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.601432   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.722907   46354 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:52:17.723000   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:17.735327   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:16.360886   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.860619   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:16.230019   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.230274   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:17.262772   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:19.264986   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.254002   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:18.753686   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:19.253956   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:19.290590   46354 api_server.go:72] duration metric: took 1.567681708s to wait for apiserver process to appear ...
	I0907 00:52:19.290614   46354 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:52:19.290632   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:19.291177   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": dial tcp 192.168.83.245:8443: connect: connection refused
	I0907 00:52:19.291217   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:19.291691   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": dial tcp 192.168.83.245:8443: connect: connection refused
	I0907 00:52:19.792323   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:21.357716   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:23.358355   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:20.728569   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:22.730042   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:21.763571   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:24.264990   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:24.793514   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0907 00:52:24.793568   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:24.939397   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:52:24.939429   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:52:25.292624   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:25.350968   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0907 00:52:25.351004   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0907 00:52:25.792573   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:25.799666   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0907 00:52:25.799697   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0907 00:52:26.292258   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:26.301200   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 200:
	ok
	I0907 00:52:26.313982   46354 api_server.go:141] control plane version: v1.16.0
	I0907 00:52:26.314007   46354 api_server.go:131] duration metric: took 7.023387143s to wait for apiserver health ...
	I0907 00:52:26.314016   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:52:26.314021   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:26.316011   46354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:52:26.317496   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:52:26.335726   46354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:52:26.373988   46354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:52:26.393836   46354 system_pods.go:59] 7 kube-system pods found
	I0907 00:52:26.393861   46354 system_pods.go:61] "coredns-5644d7b6d9-56l68" [ab956d84-2998-42a4-b9ed-b71bc43c9730] Running
	I0907 00:52:26.393866   46354 system_pods.go:61] "etcd-old-k8s-version-940806" [6234bc4e-66d0-4fb6-8631-b45ee56b774c] Running
	I0907 00:52:26.393870   46354 system_pods.go:61] "kube-apiserver-old-k8s-version-940806" [303d2368-1964-4bdb-9d46-91602d6c52b4] Running
	I0907 00:52:26.393875   46354 system_pods.go:61] "kube-controller-manager-old-k8s-version-940806" [7a193f1e-8650-453b-bfa5-d4af3a8bfbc3] Running
	I0907 00:52:26.393878   46354 system_pods.go:61] "kube-proxy-2d8pb" [1689f3e9-0487-422e-a450-9c96595cea00] Running
	I0907 00:52:26.393882   46354 system_pods.go:61] "kube-scheduler-old-k8s-version-940806" [cbd69cd2-3fc6-418b-aa4f-ef19b1b903e1] Running
	I0907 00:52:26.393886   46354 system_pods.go:61] "storage-provisioner" [f313e63f-6c39-4b81-86d1-8054fd6af338] Running
	I0907 00:52:26.393891   46354 system_pods.go:74] duration metric: took 19.879283ms to wait for pod list to return data ...
	I0907 00:52:26.393900   46354 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:52:26.401474   46354 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:52:26.401502   46354 node_conditions.go:123] node cpu capacity is 2
	I0907 00:52:26.401512   46354 node_conditions.go:105] duration metric: took 7.606706ms to run NodePressure ...
	I0907 00:52:26.401529   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:26.811645   46354 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:52:26.817493   46354 retry.go:31] will retry after 177.884133ms: kubelet not initialised
	I0907 00:52:26.999917   46354 retry.go:31] will retry after 499.371742ms: kubelet not initialised
	I0907 00:52:27.504386   46354 retry.go:31] will retry after 692.030349ms: kubelet not initialised
	I0907 00:52:28.201498   46354 retry.go:31] will retry after 627.806419ms: kubelet not initialised
	I0907 00:52:25.358575   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:27.860612   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:25.229134   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:27.230538   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:29.729637   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:26.764040   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:29.264855   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:28.841483   46354 retry.go:31] will retry after 1.816521725s: kubelet not initialised
	I0907 00:52:30.664615   46354 retry.go:31] will retry after 1.888537042s: kubelet not initialised
	I0907 00:52:32.559591   46354 retry.go:31] will retry after 1.787314239s: kubelet not initialised
	I0907 00:52:30.358330   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:32.857719   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:32.229103   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:34.229797   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:31.265047   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:33.763354   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:34.353206   46354 retry.go:31] will retry after 5.20863166s: kubelet not initialised
	I0907 00:52:34.860752   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:37.358005   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:36.229978   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:38.728934   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:36.264389   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:38.762232   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:39.567124   46354 retry.go:31] will retry after 8.04288108s: kubelet not initialised
	I0907 00:52:39.863004   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:42.359394   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:40.729770   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:43.236530   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:40.762994   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:43.263094   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:45.264328   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.616011   46354 retry.go:31] will retry after 4.959306281s: kubelet not initialised
	I0907 00:52:44.858665   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.359722   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:45.729067   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:48.228533   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.763985   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:50.263571   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.580975   46354 retry.go:31] will retry after 19.653399141s: kubelet not initialised
	I0907 00:52:49.858583   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.360050   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.361428   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:50.229168   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.229310   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.229581   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.263685   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.762390   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.857835   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.357322   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.728575   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.228623   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.762553   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.263070   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.357560   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.358151   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.228910   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.728870   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.264341   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.764046   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:05.858279   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:07.861484   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:05.729314   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:08.229765   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:06.263532   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:08.763318   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:12.241966   46354 kubeadm.go:787] kubelet initialised
	I0907 00:53:12.242006   46354 kubeadm.go:788] duration metric: took 45.430332167s waiting for restarted kubelet to initialise ...
	I0907 00:53:12.242016   46354 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:53:12.247545   46354 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.253242   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.253264   46354 pod_ready.go:81] duration metric: took 5.697075ms waiting for pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.253276   46354 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.258467   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.258489   46354 pod_ready.go:81] duration metric: took 5.206456ms waiting for pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.258497   46354 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.264371   46354 pod_ready.go:92] pod "etcd-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.264394   46354 pod_ready.go:81] duration metric: took 5.89143ms waiting for pod "etcd-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.264406   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.269447   46354 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.269467   46354 pod_ready.go:81] duration metric: took 5.053466ms waiting for pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.269481   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.638374   46354 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.638400   46354 pod_ready.go:81] duration metric: took 368.911592ms waiting for pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.638413   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2d8pb" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.039158   46354 pod_ready.go:92] pod "kube-proxy-2d8pb" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:13.039183   46354 pod_ready.go:81] duration metric: took 400.763103ms waiting for pod "kube-proxy-2d8pb" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.039191   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:10.359605   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:12.361679   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:10.729293   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.229130   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:11.263595   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.264729   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:15.268640   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.439450   46354 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:13.439477   46354 pod_ready.go:81] duration metric: took 400.279988ms waiting for pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.439486   46354 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:15.746303   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:17.747193   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:14.858056   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:16.860373   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:19.361777   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:15.730623   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:18.229790   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:17.763744   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.262360   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.246964   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:22.746507   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:21.361826   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:23.857891   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.729313   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:23.228479   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:22.263551   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:24.762509   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.246087   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:27.745946   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.858658   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:28.361105   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.732342   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:28.229971   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:26.763684   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:29.262971   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:29.746043   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:31.746133   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:30.857617   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:32.860863   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:30.728633   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:32.730094   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:31.264742   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:33.764483   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:33.748648   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:36.246158   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:35.358908   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:37.361998   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:35.229141   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:37.729367   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:36.263505   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:38.264633   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:38.746190   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.751934   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:39.858993   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:41.860052   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:44.359421   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.228491   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:42.229143   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:44.229996   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.766539   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:43.264325   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:43.245475   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:45.245574   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:47.246524   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:46.857876   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:48.859569   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:46.230037   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:48.727940   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:45.763110   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:47.763211   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.264727   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:49.745339   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:51.746054   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.859934   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:53.357432   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.729449   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:52.729731   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.731191   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:52.763145   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.763847   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.246469   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:56.746034   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:55.357937   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:57.856743   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:57.227742   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:59.228654   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:56.764030   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:58.765416   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:58.746909   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.246396   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:59.858583   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:02.357694   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:04.357907   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.229565   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.729229   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.263126   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.764100   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.745703   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:05.745994   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.858308   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:09.357561   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.229604   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.727738   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.262721   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.263088   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.264022   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.246673   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.246999   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.746105   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:11.358384   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:13.358491   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.729593   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.732429   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.762306   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.263152   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:14.746491   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.245728   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.361153   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.860338   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.229785   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.730926   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:19.733515   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.763593   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:20.264199   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:19.247271   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:21.251269   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:20.360652   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.860291   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.229545   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:24.729109   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.264956   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:24.764699   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:23.746737   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:25.747269   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:25.357166   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:27.358248   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:26.729136   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.226834   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:27.262945   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.763714   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:28.245784   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:30.245932   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.745051   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.860752   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.357600   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.361871   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:31.227731   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:33.727721   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.262586   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.263485   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.745803   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.745877   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.858000   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.859206   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:35.729469   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.227947   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.763348   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.763533   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:39.245567   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:41.246549   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:40.859969   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:42.862293   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:40.228842   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:42.230064   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:44.732421   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:41.263587   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:43.762536   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:43.746104   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:46.247106   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:45.358648   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:47.858022   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:47.229847   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:49.729764   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:45.763352   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:48.263554   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:48.745911   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.746370   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.357129   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.357416   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:54.359626   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.228487   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:54.728565   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.762919   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.764740   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:55.262939   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:53.248337   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:55.746300   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:56.858127   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.358102   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:56.730045   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.227094   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:57.263059   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.263696   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:58.247342   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:00.745494   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:02.748481   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.360153   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.360737   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.227937   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.235852   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.263956   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.763406   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.246551   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:07.747587   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.858981   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:07.861146   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.729711   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:08.228310   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.764163   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:08.263381   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.263936   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.247504   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.745798   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.360810   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.859446   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.229240   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.728782   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:14.729856   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.763565   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:15.263530   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:14.746534   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.246569   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:15.356953   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.358790   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:16.732983   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.228136   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.264573   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.763137   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.745008   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.745932   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.858109   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:22.358258   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.228589   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.729147   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.763406   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.763580   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.746337   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.748262   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:24.860943   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:27.357823   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.729423   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:27.731209   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.764235   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:28.263390   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:28.254786   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.746056   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:29.859827   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:31.861387   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:33.862627   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.227830   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:32.227911   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:34.728680   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.762895   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:32.763333   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:35.262940   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:33.247352   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:35.247638   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.747011   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:36.356562   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:38.358379   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.227942   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:39.230445   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.264134   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:39.763848   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:40.245726   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.246951   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:40.858763   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.859176   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:41.729215   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.228235   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.263784   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.762310   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.747834   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:46.748669   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:45.361972   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:47.861601   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:45.453504   46768 pod_ready.go:81] duration metric: took 4m0.000384981s waiting for pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace to be "Ready" ...
	E0907 00:55:45.453536   46768 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:55:45.453557   46768 pod_ready.go:38] duration metric: took 4m14.103603262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:55:45.453586   46768 kubeadm.go:640] restartCluster took 4m33.861797616s
	W0907 00:55:45.453681   46768 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0907 00:55:45.453721   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0907 00:55:46.762627   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:48.764174   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:49.247771   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:51.747171   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:50.361591   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:52.362641   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:53.550366   46833 pod_ready.go:81] duration metric: took 4m0.000125687s waiting for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	E0907 00:55:53.550409   46833 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:55:53.550421   46833 pod_ready.go:38] duration metric: took 4m5.601345022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:55:53.550444   46833 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:55:53.550477   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:55:53.550553   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:55:53.601802   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:53.601823   46833 cri.go:89] found id: ""
	I0907 00:55:53.601831   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:55:53.601892   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.606465   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:55:53.606555   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:55:53.643479   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:53.643509   46833 cri.go:89] found id: ""
	I0907 00:55:53.643516   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:55:53.643562   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.648049   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:55:53.648101   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:55:53.679620   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:53.679648   46833 cri.go:89] found id: ""
	I0907 00:55:53.679658   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:55:53.679706   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.684665   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:55:53.684721   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:55:53.725282   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:53.725302   46833 cri.go:89] found id: ""
	I0907 00:55:53.725309   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:55:53.725364   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.729555   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:55:53.729627   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:55:53.761846   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:53.761875   46833 cri.go:89] found id: ""
	I0907 00:55:53.761883   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:55:53.761930   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.766451   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:55:53.766523   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:55:53.800099   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:53.800118   46833 cri.go:89] found id: ""
	I0907 00:55:53.800124   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:55:53.800168   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.804614   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:55:53.804676   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:55:53.841198   46833 cri.go:89] found id: ""
	I0907 00:55:53.841219   46833 logs.go:284] 0 containers: []
	W0907 00:55:53.841225   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:55:53.841230   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:55:53.841288   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:55:53.883044   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:53.883071   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:53.883077   46833 cri.go:89] found id: ""
	I0907 00:55:53.883085   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:55:53.883133   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.887172   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.891540   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:55:53.891566   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:53.944734   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:55:53.944765   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:53.979803   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:55:53.979832   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:54.015131   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:55:54.015159   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:54.062445   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:55:54.062478   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:54.097313   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:55:54.097343   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:55:54.685400   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:55:54.685442   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:55:51.262853   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:53.764766   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:54.248875   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:56.746538   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:54.836523   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:55:54.836555   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:54.885972   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:55:54.886002   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:54.918966   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:55:54.919000   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:54.951966   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:55:54.951996   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:55:54.991382   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:55:54.991418   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:55:55.048526   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:55:55.048561   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:55:57.564574   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:55:57.579844   46833 api_server.go:72] duration metric: took 4m15.68090954s to wait for apiserver process to appear ...
	I0907 00:55:57.579867   46833 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:55:57.579899   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:55:57.579963   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:55:57.619205   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:57.619225   46833 cri.go:89] found id: ""
	I0907 00:55:57.619235   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:55:57.619287   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.623884   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:55:57.623962   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:55:57.653873   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:57.653899   46833 cri.go:89] found id: ""
	I0907 00:55:57.653907   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:55:57.653967   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.658155   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:55:57.658219   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:55:57.688169   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:57.688195   46833 cri.go:89] found id: ""
	I0907 00:55:57.688203   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:55:57.688256   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.692208   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:55:57.692274   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:55:57.722477   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:57.722498   46833 cri.go:89] found id: ""
	I0907 00:55:57.722505   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:55:57.722548   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.726875   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:55:57.726926   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:55:57.768681   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:57.768709   46833 cri.go:89] found id: ""
	I0907 00:55:57.768718   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:55:57.768768   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.773562   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:55:57.773654   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:55:57.806133   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:57.806158   46833 cri.go:89] found id: ""
	I0907 00:55:57.806166   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:55:57.806222   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.810401   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:55:57.810446   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:55:57.840346   46833 cri.go:89] found id: ""
	I0907 00:55:57.840371   46833 logs.go:284] 0 containers: []
	W0907 00:55:57.840379   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:55:57.840384   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:55:57.840435   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:55:57.869978   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:57.869998   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:57.870002   46833 cri.go:89] found id: ""
	I0907 00:55:57.870008   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:55:57.870052   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.874945   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.878942   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:55:57.878964   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:55:58.015009   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:55:58.015035   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:58.063331   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:55:58.063365   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:58.098316   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:55:58.098343   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:58.140312   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:55:58.140342   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:58.170471   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:55:58.170499   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:55:58.217775   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:55:58.217804   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:55:58.275681   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:55:58.275717   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:58.323629   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:55:58.323663   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:58.360608   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:55:58.360636   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:58.397158   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:55:58.397193   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:58.435395   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:55:58.435425   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:55:59.023632   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:55:59.023687   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:55:55.767692   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:58.262808   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:00.263787   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:59.246042   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:01.746441   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:01.540667   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:56:01.548176   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 200:
	ok
	I0907 00:56:01.549418   46833 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:01.549443   46833 api_server.go:131] duration metric: took 3.969568684s to wait for apiserver health ...
	I0907 00:56:01.549451   46833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:01.549474   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:01.549546   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:01.579945   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:56:01.579975   46833 cri.go:89] found id: ""
	I0907 00:56:01.579985   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:56:01.580038   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.584609   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:01.584673   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:01.628626   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:56:01.628647   46833 cri.go:89] found id: ""
	I0907 00:56:01.628656   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:56:01.628711   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.633293   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:01.633362   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:01.663898   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:56:01.663923   46833 cri.go:89] found id: ""
	I0907 00:56:01.663932   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:56:01.663994   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.668130   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:01.668198   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:01.699021   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:56:01.699045   46833 cri.go:89] found id: ""
	I0907 00:56:01.699055   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:56:01.699107   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.703470   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:01.703536   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:01.740360   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:56:01.740387   46833 cri.go:89] found id: ""
	I0907 00:56:01.740396   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:56:01.740450   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.747366   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:01.747445   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:01.783175   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:56:01.783218   46833 cri.go:89] found id: ""
	I0907 00:56:01.783226   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:56:01.783267   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.787565   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:01.787628   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:01.822700   46833 cri.go:89] found id: ""
	I0907 00:56:01.822730   46833 logs.go:284] 0 containers: []
	W0907 00:56:01.822740   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:01.822747   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:01.822818   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:01.853909   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:56:01.853934   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:56:01.853938   46833 cri.go:89] found id: ""
	I0907 00:56:01.853945   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:56:01.853990   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.858209   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.862034   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:56:01.862053   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:56:01.902881   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:56:01.902915   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:56:01.937846   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:56:01.937882   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:56:01.993495   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:56:01.993526   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:56:02.029773   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:56:02.029810   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:02.076180   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:02.076210   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:02.133234   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:02.133268   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:02.278183   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:56:02.278209   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:56:02.325096   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:56:02.325125   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:56:02.362517   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:56:02.362542   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:56:02.393393   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:02.393430   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:02.950480   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:02.950521   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:02.967628   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:56:02.967658   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:56:05.533216   46833 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:05.533249   46833 system_pods.go:61] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running
	I0907 00:56:05.533257   46833 system_pods.go:61] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running
	I0907 00:56:05.533264   46833 system_pods.go:61] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running
	I0907 00:56:05.533271   46833 system_pods.go:61] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running
	I0907 00:56:05.533276   46833 system_pods.go:61] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running
	I0907 00:56:05.533283   46833 system_pods.go:61] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running
	I0907 00:56:05.533292   46833 system_pods.go:61] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:05.533305   46833 system_pods.go:61] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running
	I0907 00:56:05.533315   46833 system_pods.go:74] duration metric: took 3.983859289s to wait for pod list to return data ...
	I0907 00:56:05.533327   46833 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:05.536806   46833 default_sa.go:45] found service account: "default"
	I0907 00:56:05.536833   46833 default_sa.go:55] duration metric: took 3.496147ms for default service account to be created ...
	I0907 00:56:05.536842   46833 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:05.543284   46833 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:05.543310   46833 system_pods.go:89] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running
	I0907 00:56:05.543318   46833 system_pods.go:89] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running
	I0907 00:56:05.543325   46833 system_pods.go:89] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running
	I0907 00:56:05.543332   46833 system_pods.go:89] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running
	I0907 00:56:05.543337   46833 system_pods.go:89] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running
	I0907 00:56:05.543344   46833 system_pods.go:89] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running
	I0907 00:56:05.543355   46833 system_pods.go:89] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:05.543367   46833 system_pods.go:89] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running
	I0907 00:56:05.543377   46833 system_pods.go:126] duration metric: took 6.528914ms to wait for k8s-apps to be running ...
	I0907 00:56:05.543391   46833 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:05.543437   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:05.559581   46833 system_svc.go:56] duration metric: took 16.174514ms WaitForService to wait for kubelet.
	I0907 00:56:05.559613   46833 kubeadm.go:581] duration metric: took 4m23.660681176s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:05.559638   46833 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:05.564521   46833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:05.564552   46833 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:05.564566   46833 node_conditions.go:105] duration metric: took 4.922449ms to run NodePressure ...
	I0907 00:56:05.564579   46833 start.go:228] waiting for startup goroutines ...
	I0907 00:56:05.564589   46833 start.go:233] waiting for cluster config update ...
	I0907 00:56:05.564609   46833 start.go:242] writing updated cluster config ...
	I0907 00:56:05.564968   46833 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:05.618906   46833 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:05.620461   46833 out.go:177] * Done! kubectl is now configured to use "embed-certs-546209" cluster and "default" namespace by default
	I0907 00:56:02.763702   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:05.264729   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:04.246390   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:06.246925   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:07.762598   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:09.764581   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:08.746379   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:11.246764   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:12.263747   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:12.364712   47297 pod_ready.go:81] duration metric: took 4m0.00109115s waiting for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	E0907 00:56:12.364763   47297 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:56:12.364776   47297 pod_ready.go:38] duration metric: took 4m3.209409487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:12.364799   47297 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:56:12.364833   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:12.364891   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:12.416735   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:12.416760   47297 cri.go:89] found id: ""
	I0907 00:56:12.416767   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:12.416818   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.423778   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:12.423849   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:12.465058   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:12.465086   47297 cri.go:89] found id: ""
	I0907 00:56:12.465095   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:12.465152   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.471730   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:12.471793   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:12.508984   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:12.509005   47297 cri.go:89] found id: ""
	I0907 00:56:12.509017   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:12.509073   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.513689   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:12.513745   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:12.550233   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:12.550257   47297 cri.go:89] found id: ""
	I0907 00:56:12.550266   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:12.550325   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.556588   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:12.556665   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:12.598826   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:12.598853   47297 cri.go:89] found id: ""
	I0907 00:56:12.598862   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:12.598913   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.603710   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:12.603778   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:12.645139   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:12.645169   47297 cri.go:89] found id: ""
	I0907 00:56:12.645179   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:12.645236   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.650685   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:12.650755   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:12.686256   47297 cri.go:89] found id: ""
	I0907 00:56:12.686284   47297 logs.go:284] 0 containers: []
	W0907 00:56:12.686291   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:12.686297   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:12.686349   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:12.719614   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:12.719638   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:12.719645   47297 cri.go:89] found id: ""
	I0907 00:56:12.719655   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:12.719713   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.724842   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.728880   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:12.728899   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:12.771051   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:12.771081   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:12.812110   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:12.812140   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:12.847819   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:12.847845   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:13.436674   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:13.436711   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:13.454385   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:13.454425   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:13.617809   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:13.617838   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:13.652209   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:13.652239   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:13.683939   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:13.683977   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:13.730116   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:13.730151   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:13.763253   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:13.763278   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:13.804890   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:13.804918   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:13.861822   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:13.861856   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
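For reference, each "Gathering logs for ..." step above is a single shell command on the node: per-container logs come from crictl keyed by the container id discovered earlier, while runtime, kubelet and kernel logs come from journalctl and dmesg. A minimal sketch of the same sequence, runnable by hand over SSH (the container-id lookup is illustrative, not taken from this run):

    # Look up one container id the same way logs.go does, then tail its logs.
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    sudo /usr/bin/crictl logs --tail 400 "$ID"
    # Runtime, kubelet and kernel logs, exactly as in the commands above.
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400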
	I0907 00:56:17.242461   46768 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.788701806s)
	I0907 00:56:17.242546   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:17.259241   46768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:56:17.268943   46768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:56:17.278094   46768 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:56:17.278138   46768 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0907 00:56:17.342868   46768 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0907 00:56:17.342981   46768 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 00:56:17.519943   46768 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:56:17.520089   46768 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:56:17.520214   46768 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 00:56:17.714902   46768 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:56:13.247487   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:15.746162   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:17.748049   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:17.716739   46768 out.go:204]   - Generating certificates and keys ...
	I0907 00:56:17.716894   46768 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 00:56:17.717007   46768 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 00:56:17.717113   46768 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0907 00:56:17.717361   46768 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0907 00:56:17.717892   46768 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0907 00:56:17.718821   46768 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0907 00:56:17.719502   46768 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0907 00:56:17.719996   46768 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0907 00:56:17.720644   46768 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0907 00:56:17.721254   46768 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0907 00:56:17.721832   46768 kubeadm.go:322] [certs] Using the existing "sa" key
	I0907 00:56:17.721911   46768 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:56:17.959453   46768 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:56:18.029012   46768 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:56:18.146402   46768 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:56:18.309148   46768 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:56:18.309726   46768 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:56:18.312628   46768 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:56:18.315593   46768 out.go:204]   - Booting up control plane ...
	I0907 00:56:18.315744   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:56:18.315870   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:56:18.317157   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:56:18.336536   46768 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:56:18.336947   46768 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:56:18.337042   46768 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0907 00:56:18.472759   46768 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
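While kubeadm waits here for the control plane to come up as static Pods, the same progress can be watched directly on the node. A small sketch, assuming the CRI-O/crictl setup seen elsewhere in this run:

    # Manifests that `kubeadm init` just wrote (see the [control-plane] lines above).
    sudo ls /etc/kubernetes/manifests
    # Containers the kubelet has started from those manifests so far.
    sudo crictl ps -a | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'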
	I0907 00:56:16.415279   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:56:16.431021   47297 api_server.go:72] duration metric: took 4m14.6757965s to wait for apiserver process to appear ...
	I0907 00:56:16.431047   47297 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:56:16.431086   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:16.431144   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:16.474048   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:16.474075   47297 cri.go:89] found id: ""
	I0907 00:56:16.474085   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:16.474141   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.478873   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:16.478956   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:16.512799   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:16.512817   47297 cri.go:89] found id: ""
	I0907 00:56:16.512824   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:16.512880   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.518717   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:16.518812   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:16.553996   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:16.554016   47297 cri.go:89] found id: ""
	I0907 00:56:16.554023   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:16.554066   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.559358   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:16.559422   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:16.598717   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:16.598739   47297 cri.go:89] found id: ""
	I0907 00:56:16.598746   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:16.598821   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.603704   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:16.603766   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:16.646900   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:16.646928   47297 cri.go:89] found id: ""
	I0907 00:56:16.646937   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:16.646995   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.651216   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:16.651287   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:16.681334   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:16.681361   47297 cri.go:89] found id: ""
	I0907 00:56:16.681374   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:16.681429   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.685963   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:16.686028   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:16.720214   47297 cri.go:89] found id: ""
	I0907 00:56:16.720243   47297 logs.go:284] 0 containers: []
	W0907 00:56:16.720253   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:16.720259   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:16.720316   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:16.756411   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:16.756437   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:16.756444   47297 cri.go:89] found id: ""
	I0907 00:56:16.756452   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:16.756512   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.762211   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.767635   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:16.767659   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:16.784092   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:16.784122   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:16.936817   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:16.936845   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:16.979426   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:16.979455   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:17.009878   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:17.009912   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:17.048086   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:17.048113   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:17.103114   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:17.103156   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:17.139125   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:17.139163   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:17.181560   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:17.181588   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:17.224815   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:17.224841   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:17.299438   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:17.299474   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:17.355165   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:17.355197   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:17.403781   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:17.403809   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:20.491060   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:56:20.498573   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 200:
	ok
	I0907 00:56:20.501753   47297 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:20.501774   47297 api_server.go:131] duration metric: took 4.070720466s to wait for apiserver health ...
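The healthz wait above is a plain HTTPS GET against the apiserver endpoint recorded for this profile (8444 is the non-default port that gives default-k8s-diff-port its name). The same probe can be issued by hand; -k skips certificate verification, which is enough for a quick check (use the CA under /var/lib/minikube/certs for a strict one):

    curl -sk https://192.168.39.96:8444/healthz             # expect: ok
    curl -sk "https://192.168.39.96:8444/healthz?verbose"   # per-check breakdown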
	I0907 00:56:20.501782   47297 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:20.501807   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:20.501856   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:20.545524   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:20.545550   47297 cri.go:89] found id: ""
	I0907 00:56:20.545560   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:20.545616   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.552051   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:20.552120   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:20.593019   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:20.593041   47297 cri.go:89] found id: ""
	I0907 00:56:20.593049   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:20.593104   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.598430   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:20.598500   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:20.639380   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:20.639407   47297 cri.go:89] found id: ""
	I0907 00:56:20.639417   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:20.639507   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.645270   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:20.645342   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:20.247030   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:22.247132   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:20.684338   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:20.684368   47297 cri.go:89] found id: ""
	I0907 00:56:20.684378   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:20.684438   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.689465   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:20.689528   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:20.727854   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:20.727879   47297 cri.go:89] found id: ""
	I0907 00:56:20.727887   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:20.727938   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.733320   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:20.733389   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:20.776584   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:20.776607   47297 cri.go:89] found id: ""
	I0907 00:56:20.776614   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:20.776659   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.781745   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:20.781822   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:20.817720   47297 cri.go:89] found id: ""
	I0907 00:56:20.817746   47297 logs.go:284] 0 containers: []
	W0907 00:56:20.817756   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:20.817763   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:20.817819   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:20.857693   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:20.857716   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:20.857723   47297 cri.go:89] found id: ""
	I0907 00:56:20.857732   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:20.857788   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.862242   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.866469   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:20.866489   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:20.907476   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:20.907514   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:20.946383   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:20.946418   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:20.983830   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:20.983858   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:21.572473   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:21.572524   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:21.626465   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:21.626496   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:21.692455   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:21.692491   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:21.712600   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:21.712632   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:21.855914   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:21.855948   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:21.909035   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:21.909068   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:21.961286   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:21.961317   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:22.002150   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:22.002177   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:22.035129   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:22.035156   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:24.592419   47297 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:24.592455   47297 system_pods.go:61] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running
	I0907 00:56:24.592460   47297 system_pods.go:61] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running
	I0907 00:56:24.592464   47297 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running
	I0907 00:56:24.592469   47297 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running
	I0907 00:56:24.592473   47297 system_pods.go:61] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running
	I0907 00:56:24.592477   47297 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running
	I0907 00:56:24.592483   47297 system_pods.go:61] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:24.592489   47297 system_pods.go:61] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running
	I0907 00:56:24.592494   47297 system_pods.go:74] duration metric: took 4.090707422s to wait for pod list to return data ...
	I0907 00:56:24.592501   47297 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:24.596106   47297 default_sa.go:45] found service account: "default"
	I0907 00:56:24.596127   47297 default_sa.go:55] duration metric: took 3.621408ms for default service account to be created ...
	I0907 00:56:24.596134   47297 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:24.601998   47297 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:24.602021   47297 system_pods.go:89] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running
	I0907 00:56:24.602026   47297 system_pods.go:89] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running
	I0907 00:56:24.602032   47297 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running
	I0907 00:56:24.602037   47297 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running
	I0907 00:56:24.602041   47297 system_pods.go:89] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running
	I0907 00:56:24.602046   47297 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running
	I0907 00:56:24.602054   47297 system_pods.go:89] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:24.602063   47297 system_pods.go:89] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running
	I0907 00:56:24.602069   47297 system_pods.go:126] duration metric: took 5.931212ms to wait for k8s-apps to be running ...
	I0907 00:56:24.602076   47297 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:24.602116   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:24.623704   47297 system_svc.go:56] duration metric: took 21.617229ms WaitForService to wait for kubelet.
	I0907 00:56:24.623734   47297 kubeadm.go:581] duration metric: took 4m22.868513281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:24.623754   47297 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:24.628408   47297 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:24.628435   47297 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:24.628444   47297 node_conditions.go:105] duration metric: took 4.686272ms to run NodePressure ...
	I0907 00:56:24.628454   47297 start.go:228] waiting for startup goroutines ...
	I0907 00:56:24.628460   47297 start.go:233] waiting for cluster config update ...
	I0907 00:56:24.628469   47297 start.go:242] writing updated cluster config ...
	I0907 00:56:24.628735   47297 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:24.683237   47297 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:24.686336   47297 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-773466" cluster and "default" namespace by default
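With the default-k8s-diff-port-773466 start complete, the kubeconfig written during this run can be used to confirm the state later assertions depend on. An illustrative check (the context name follows minikube's profile-name convention; KUBECONFIG must point at the file this run updated):

    kubectl --context default-k8s-diff-port-773466 get nodes
    kubectl --context default-k8s-diff-port-773466 -n kube-system get pods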
	I0907 00:56:26.977381   46768 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503998 seconds
	I0907 00:56:26.977624   46768 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:56:27.000116   46768 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:56:27.541598   46768 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:56:27.541809   46768 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-321164 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0907 00:56:28.055045   46768 kubeadm.go:322] [bootstrap-token] Using token: 7x1950.9u417zcplp1q0xai
	I0907 00:56:24.247241   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:26.773163   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:28.056582   46768 out.go:204]   - Configuring RBAC rules ...
	I0907 00:56:28.056725   46768 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:56:28.065256   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0907 00:56:28.075804   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:56:28.081996   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 00:56:28.090825   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:56:28.097257   46768 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:56:28.114787   46768 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0907 00:56:28.337001   46768 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 00:56:28.476411   46768 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 00:56:28.479682   46768 kubeadm.go:322] 
	I0907 00:56:28.479784   46768 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 00:56:28.479799   46768 kubeadm.go:322] 
	I0907 00:56:28.479898   46768 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 00:56:28.479912   46768 kubeadm.go:322] 
	I0907 00:56:28.479943   46768 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 00:56:28.480046   46768 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:56:28.480143   46768 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:56:28.480163   46768 kubeadm.go:322] 
	I0907 00:56:28.480343   46768 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0907 00:56:28.480361   46768 kubeadm.go:322] 
	I0907 00:56:28.480431   46768 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0907 00:56:28.480450   46768 kubeadm.go:322] 
	I0907 00:56:28.480544   46768 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 00:56:28.480656   46768 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:56:28.480783   46768 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:56:28.480796   46768 kubeadm.go:322] 
	I0907 00:56:28.480924   46768 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0907 00:56:28.481024   46768 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 00:56:28.481034   46768 kubeadm.go:322] 
	I0907 00:56:28.481117   46768 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7x1950.9u417zcplp1q0xai \
	I0907 00:56:28.481203   46768 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 00:56:28.481223   46768 kubeadm.go:322] 	--control-plane 
	I0907 00:56:28.481226   46768 kubeadm.go:322] 
	I0907 00:56:28.481346   46768 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:56:28.481355   46768 kubeadm.go:322] 
	I0907 00:56:28.481453   46768 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7x1950.9u417zcplp1q0xai \
	I0907 00:56:28.481572   46768 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:56:28.482216   46768 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:56:28.482238   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:56:28.482248   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:56:28.484094   46768 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:56:28.485597   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:56:28.537400   46768 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
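The 457-byte /etc/cni/net.d/1-k8s.conflist copied here is minikube's bridge CNI configuration; its exact contents are not shown in the log. A conflist of the same general shape (bridge plugin with host-local IPAM plus portmap), written the same way, would look roughly like this illustrative sketch:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF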
	I0907 00:56:28.577654   46768 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:56:28.577734   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:28.577747   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=no-preload-321164 minikube.k8s.io/updated_at=2023_09_07T00_56_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:28.909178   46768 ops.go:34] apiserver oom_adj: -16
	I0907 00:56:28.920821   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.027812   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.627489   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:30.127554   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.246606   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:31.746291   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:30.627315   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:31.127759   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:31.627183   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:32.127488   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:32.627464   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.126850   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.626901   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:34.126917   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:34.626850   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:35.127788   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.747054   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:35.747536   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:35.627454   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:36.126916   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:36.626926   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:37.126845   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:37.627579   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:38.126885   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:38.627849   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:39.127371   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:39.627929   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.127775   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.627392   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.760535   46768 kubeadm.go:1081] duration metric: took 12.182860946s to wait for elevateKubeSystemPrivileges.
	I0907 00:56:40.760574   46768 kubeadm.go:406] StartCluster complete in 5m29.209699324s
	I0907 00:56:40.760594   46768 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:56:40.760690   46768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:56:40.762820   46768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:56:40.763132   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:56:40.763152   46768 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:56:40.763245   46768 addons.go:69] Setting storage-provisioner=true in profile "no-preload-321164"
	I0907 00:56:40.763251   46768 addons.go:69] Setting default-storageclass=true in profile "no-preload-321164"
	I0907 00:56:40.763263   46768 addons.go:231] Setting addon storage-provisioner=true in "no-preload-321164"
	W0907 00:56:40.763271   46768 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:56:40.763272   46768 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-321164"
	I0907 00:56:40.763314   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.763357   46768 config.go:182] Loaded profile config "no-preload-321164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:56:40.763404   46768 addons.go:69] Setting metrics-server=true in profile "no-preload-321164"
	I0907 00:56:40.763421   46768 addons.go:231] Setting addon metrics-server=true in "no-preload-321164"
	W0907 00:56:40.763428   46768 addons.go:240] addon metrics-server should already be in state true
	I0907 00:56:40.763464   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.763718   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763747   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.763772   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763793   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.763811   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763833   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.781727   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I0907 00:56:40.781738   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0907 00:56:40.781741   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0907 00:56:40.782188   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782279   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782332   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782702   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782724   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.782856   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782873   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.782879   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782894   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.783096   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783306   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783354   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783531   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.783686   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.783717   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.783905   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.783949   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.801244   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0907 00:56:40.801534   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36269
	I0907 00:56:40.801961   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.802064   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.802509   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.802529   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.802673   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.802689   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.802942   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.803153   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.803218   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.803365   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.804775   46768 addons.go:231] Setting addon default-storageclass=true in "no-preload-321164"
	W0907 00:56:40.804798   46768 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:56:40.804828   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.805191   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.805490   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.807809   46768 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:56:40.806890   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.809154   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.809188   46768 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:56:40.809199   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:56:40.809215   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.809249   46768 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:56:40.810543   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:56:40.810557   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:56:40.810570   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.809485   46768 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-321164" context rescaled to 1 replicas
	I0907 00:56:40.810637   46768 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:56:40.813528   46768 out.go:177] * Verifying Kubernetes components...
	I0907 00:56:38.246743   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:40.747015   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:40.814976   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:40.817948   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.818029   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.818080   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818100   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.818117   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818137   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818156   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.818175   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818212   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.818282   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.818348   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.818462   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.818472   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.818676   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.827224   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I0907 00:56:40.827578   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.828106   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.828122   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.828464   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.829012   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.829043   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.843423   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41287
	I0907 00:56:40.843768   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.844218   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.844236   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.844529   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.844735   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.846265   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.846489   46768 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:56:40.846506   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:56:40.846525   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.849325   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.849666   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.849704   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.849897   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.850103   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.850251   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.850397   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.965966   46768 node_ready.go:35] waiting up to 6m0s for node "no-preload-321164" to be "Ready" ...
	I0907 00:56:40.966030   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0907 00:56:40.997127   46768 node_ready.go:49] node "no-preload-321164" has status "Ready":"True"
	I0907 00:56:40.997149   46768 node_ready.go:38] duration metric: took 31.151467ms waiting for node "no-preload-321164" to be "Ready" ...
	I0907 00:56:40.997158   46768 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:41.010753   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:56:41.011536   46768 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:41.022410   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:56:41.022431   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:56:41.051599   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:56:41.119566   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:56:41.119594   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:56:41.228422   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:56:41.228443   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:56:41.321420   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:56:42.776406   46768 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.810334575s)
	I0907 00:56:42.776435   46768 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
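The two lines above complete minikube's CoreDNS rewrite: the long sed pipeline pulls the coredns ConfigMap, inserts a log directive plus a hosts block mapping host.minikube.internal to the host-side gateway (192.168.61.1 here), and replaces the ConfigMap. A quick way to confirm the injected stanza from the workstation is the sketch below; it assumes the kubeconfig context carries the profile name, as the "Done!" line at the end of this run states.

    # Inspect the rewritten Corefile (same object the log's kubectl pipeline edited):
    kubectl --context no-preload-321164 -n kube-system get configmap coredns -o yaml
    # The Corefile data should now contain, just before the forward plugin:
    #     hosts {
    #        192.168.61.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf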
	I0907 00:56:43.385184   46768 pod_ready.go:102] pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:43.446190   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.435398332s)
	I0907 00:56:43.446240   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.446248   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.3946112s)
	I0907 00:56:43.446255   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.449355   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.449362   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.449377   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.449389   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.449406   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.449732   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.449771   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.449787   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.450189   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.450216   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.450653   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.450672   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.450682   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.450691   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.451532   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.451597   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.451619   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.451635   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.451648   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.451869   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.451885   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.451895   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.689511   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.368045812s)
	I0907 00:56:43.689565   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.689579   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.689952   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.689963   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.689974   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.689991   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.690001   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.690291   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.690307   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.690309   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.690322   46768 addons.go:467] Verifying addon metrics-server=true in "no-preload-321164"
	I0907 00:56:43.693105   46768 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:56:43.694562   46768 addons.go:502] enable addons completed in 2.931409197s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0907 00:56:45.310723   46768 pod_ready.go:92] pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.310742   46768 pod_ready.go:81] duration metric: took 4.299181671s waiting for pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.310753   46768 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.316350   46768 pod_ready.go:92] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.316373   46768 pod_ready.go:81] duration metric: took 5.614264ms waiting for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.316385   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.321183   46768 pod_ready.go:92] pod "kube-apiserver-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.321205   46768 pod_ready.go:81] duration metric: took 4.811919ms waiting for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.321216   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.326279   46768 pod_ready.go:92] pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.326297   46768 pod_ready.go:81] duration metric: took 5.0741ms waiting for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.326308   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-st6n8" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.332665   46768 pod_ready.go:92] pod "kube-proxy-st6n8" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.332687   46768 pod_ready.go:81] duration metric: took 6.372253ms waiting for pod "kube-proxy-st6n8" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.332697   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.708023   46768 pod_ready.go:92] pod "kube-scheduler-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.708044   46768 pod_ready.go:81] duration metric: took 375.339873ms waiting for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.708051   46768 pod_ready.go:38] duration metric: took 4.710884592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:45.708065   46768 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:56:45.708106   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:56:45.725929   46768 api_server.go:72] duration metric: took 4.915250734s to wait for apiserver process to appear ...
	I0907 00:56:45.725950   46768 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:56:45.725964   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:56:45.731998   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 200:
	ok
	I0907 00:56:45.733492   46768 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:45.733507   46768 api_server.go:131] duration metric: took 7.552661ms to wait for apiserver health ...
	I0907 00:56:45.733514   46768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:45.911337   46768 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:45.911374   46768 system_pods.go:61] "coredns-5dd5756b68-8tnp7" [1d896961-1b2c-48fd-b9dd-a40a95174fed] Running
	I0907 00:56:45.911383   46768 system_pods.go:61] "etcd-no-preload-321164" [84b8dd41-f676-48e0-b231-c27178cc0345] Running
	I0907 00:56:45.911389   46768 system_pods.go:61] "kube-apiserver-no-preload-321164" [a5a3cde8-128a-411d-9970-d3811ba22c5c] Running
	I0907 00:56:45.911397   46768 system_pods.go:61] "kube-controller-manager-no-preload-321164" [81614893-1ef1-4246-84ad-d4a2d9dedff8] Running
	I0907 00:56:45.911403   46768 system_pods.go:61] "kube-proxy-st6n8" [8f3aa3f2-223b-43de-b0e9-987958c50108] Running
	I0907 00:56:45.911410   46768 system_pods.go:61] "kube-scheduler-no-preload-321164" [7a45c187-7365-4144-ae68-ba42b1069afd] Running
	I0907 00:56:45.911421   46768 system_pods.go:61] "metrics-server-57f55c9bc5-vgngs" [9036423c-c4f7-4beb-92da-e106b8af306c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:45.911435   46768 system_pods.go:61] "storage-provisioner" [58bbe692-61d0-466d-b6bf-28af2faf4ec9] Running
	I0907 00:56:45.911443   46768 system_pods.go:74] duration metric: took 177.923008ms to wait for pod list to return data ...
	I0907 00:56:45.911455   46768 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:46.107121   46768 default_sa.go:45] found service account: "default"
	I0907 00:56:46.107149   46768 default_sa.go:55] duration metric: took 195.685496ms for default service account to be created ...
	I0907 00:56:46.107159   46768 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:46.314551   46768 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:46.314588   46768 system_pods.go:89] "coredns-5dd5756b68-8tnp7" [1d896961-1b2c-48fd-b9dd-a40a95174fed] Running
	I0907 00:56:46.314596   46768 system_pods.go:89] "etcd-no-preload-321164" [84b8dd41-f676-48e0-b231-c27178cc0345] Running
	I0907 00:56:46.314603   46768 system_pods.go:89] "kube-apiserver-no-preload-321164" [a5a3cde8-128a-411d-9970-d3811ba22c5c] Running
	I0907 00:56:46.314611   46768 system_pods.go:89] "kube-controller-manager-no-preload-321164" [81614893-1ef1-4246-84ad-d4a2d9dedff8] Running
	I0907 00:56:46.314618   46768 system_pods.go:89] "kube-proxy-st6n8" [8f3aa3f2-223b-43de-b0e9-987958c50108] Running
	I0907 00:56:46.314624   46768 system_pods.go:89] "kube-scheduler-no-preload-321164" [7a45c187-7365-4144-ae68-ba42b1069afd] Running
	I0907 00:56:46.314634   46768 system_pods.go:89] "metrics-server-57f55c9bc5-vgngs" [9036423c-c4f7-4beb-92da-e106b8af306c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:46.314645   46768 system_pods.go:89] "storage-provisioner" [58bbe692-61d0-466d-b6bf-28af2faf4ec9] Running
	I0907 00:56:46.314653   46768 system_pods.go:126] duration metric: took 207.48874ms to wait for k8s-apps to be running ...
	I0907 00:56:46.314663   46768 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:46.314713   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:46.331286   46768 system_svc.go:56] duration metric: took 16.613382ms WaitForService to wait for kubelet.
	I0907 00:56:46.331316   46768 kubeadm.go:581] duration metric: took 5.520640777s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:46.331342   46768 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:46.507374   46768 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:46.507398   46768 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:46.507406   46768 node_conditions.go:105] duration metric: took 176.059527ms to run NodePressure ...
	I0907 00:56:46.507417   46768 start.go:228] waiting for startup goroutines ...
	I0907 00:56:46.507422   46768 start.go:233] waiting for cluster config update ...
	I0907 00:56:46.507433   46768 start.go:242] writing updated cluster config ...
	I0907 00:56:46.507728   46768 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:46.559712   46768 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:46.561693   46768 out.go:177] * Done! kubectl is now configured to use "no-preload-321164" cluster and "default" namespace by default
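This closes out the no-preload-321164 bring-up: node Ready, system-critical pods Running (metrics-server still Pending), apiserver /healthz returning 200, addons enabled, and the profile marked done. The commands below are a rough manual recheck of the same milestones, again assuming the kubeconfig context is named after the profile as the "Done!" line reports.

    kubectl --context no-preload-321164 get nodes                 # node should report Ready
    kubectl --context no-preload-321164 -n kube-system get pods   # control-plane pods Running, metrics-server Pending
    kubectl --context no-preload-321164 get --raw /healthz        # prints "ok", the same endpoint probed at api_server.go:253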
	I0907 00:56:43.245531   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:45.746168   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:48.247228   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:50.746605   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:52.748264   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:55.246186   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:57.746658   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:00.245358   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:02.246373   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:04.746154   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:07.245583   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:09.246215   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:11.247141   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:13.247249   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:13.440321   46354 pod_ready.go:81] duration metric: took 4m0.000811237s waiting for pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace to be "Ready" ...
	E0907 00:57:13.440352   46354 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:57:13.440368   46354 pod_ready.go:38] duration metric: took 4m1.198343499s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:57:13.440395   46354 kubeadm.go:640] restartCluster took 5m7.071390852s
	W0907 00:57:13.440463   46354 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0907 00:57:13.440538   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0907 00:57:26.505313   46354 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.064737983s)
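The 4m0s timeout a few lines up (pod "metrics-server-74d5856cc6-6s7hd" never Ready) is what sends this old-k8s-version-940806 run down the reset path instead of a plain restart: in this job the metrics-server addon points at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line later in the log), an image that presumably can never be pulled, so the pod stays Pending until the WaitExtra deadline expires. One way to inspect the same symptom on a live profile is sketched below; the k8s-app=metrics-server label selector is an assumption based on the stock addon manifests, not something shown in this log.

    kubectl --context old-k8s-version-940806 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context old-k8s-version-940806 -n kube-system describe pod -l k8s-app=metrics-server   # events should show the failing pull from fake.domain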
	I0907 00:57:26.505392   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:57:26.521194   46354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:57:26.530743   46354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:57:26.540431   46354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:57:26.540473   46354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0907 00:57:26.744360   46354 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:57:39.131760   46354 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0907 00:57:39.131857   46354 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 00:57:39.131964   46354 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:57:39.132110   46354 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:57:39.132226   46354 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 00:57:39.132360   46354 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:57:39.132501   46354 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:57:39.132573   46354 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0907 00:57:39.132654   46354 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:57:39.134121   46354 out.go:204]   - Generating certificates and keys ...
	I0907 00:57:39.134212   46354 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 00:57:39.134313   46354 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 00:57:39.134422   46354 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0907 00:57:39.134501   46354 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0907 00:57:39.134605   46354 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0907 00:57:39.134688   46354 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0907 00:57:39.134801   46354 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0907 00:57:39.134902   46354 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0907 00:57:39.135010   46354 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0907 00:57:39.135121   46354 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0907 00:57:39.135169   46354 kubeadm.go:322] [certs] Using the existing "sa" key
	I0907 00:57:39.135241   46354 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:57:39.135308   46354 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:57:39.135393   46354 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:57:39.135512   46354 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:57:39.135599   46354 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:57:39.135700   46354 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:57:39.137273   46354 out.go:204]   - Booting up control plane ...
	I0907 00:57:39.137369   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:57:39.137458   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:57:39.137561   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:57:39.137677   46354 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:57:39.137888   46354 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 00:57:39.138013   46354 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503675 seconds
	I0907 00:57:39.138137   46354 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:57:39.138249   46354 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:57:39.138297   46354 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:57:39.138402   46354 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-940806 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0907 00:57:39.138453   46354 kubeadm.go:322] [bootstrap-token] Using token: nfcsq1.o4ef3s2bthacz2l0
	I0907 00:57:39.139754   46354 out.go:204]   - Configuring RBAC rules ...
	I0907 00:57:39.139848   46354 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:57:39.139970   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:57:39.140112   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 00:57:39.140245   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:57:39.140327   46354 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:57:39.140393   46354 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 00:57:39.140442   46354 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 00:57:39.140452   46354 kubeadm.go:322] 
	I0907 00:57:39.140525   46354 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 00:57:39.140533   46354 kubeadm.go:322] 
	I0907 00:57:39.140628   46354 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 00:57:39.140635   46354 kubeadm.go:322] 
	I0907 00:57:39.140665   46354 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 00:57:39.140752   46354 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:57:39.140822   46354 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:57:39.140834   46354 kubeadm.go:322] 
	I0907 00:57:39.140896   46354 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 00:57:39.140960   46354 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:57:39.141043   46354 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:57:39.141051   46354 kubeadm.go:322] 
	I0907 00:57:39.141159   46354 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0907 00:57:39.141262   46354 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 00:57:39.141276   46354 kubeadm.go:322] 
	I0907 00:57:39.141407   46354 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nfcsq1.o4ef3s2bthacz2l0 \
	I0907 00:57:39.141536   46354 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 00:57:39.141568   46354 kubeadm.go:322]     --control-plane 	  
	I0907 00:57:39.141575   46354 kubeadm.go:322] 
	I0907 00:57:39.141657   46354 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:57:39.141665   46354 kubeadm.go:322] 
	I0907 00:57:39.141730   46354 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nfcsq1.o4ef3s2bthacz2l0 \
	I0907 00:57:39.141832   46354 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:57:39.141851   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:57:39.141863   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:57:39.143462   46354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:57:39.144982   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:57:39.158663   46354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:57:39.180662   46354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:57:39.180747   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.180749   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=old-k8s-version-940806 minikube.k8s.io/updated_at=2023_09_07T00_57_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.208969   46354 ops.go:34] apiserver oom_adj: -16
	I0907 00:57:39.426346   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.545090   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:40.162127   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:40.662172   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:41.162069   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:41.662164   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:42.162355   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:42.662152   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:43.161862   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:43.661532   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:44.162130   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:44.661948   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:45.162260   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:45.662082   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:46.162345   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:46.662378   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:47.162307   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:47.662556   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:48.162204   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:48.661938   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:49.161608   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:49.662198   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:50.162016   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:50.662392   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:51.162303   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:51.662393   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:52.162510   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:52.662195   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:53.162302   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:53.662427   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.162085   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.662218   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.779895   46354 kubeadm.go:1081] duration metric: took 15.599222217s to wait for elevateKubeSystemPrivileges.
	I0907 00:57:54.779927   46354 kubeadm.go:406] StartCluster complete in 5m48.456500898s
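The run of identical "kubectl get sa default" calls above is a poll, not a glitch: immediately after kubeadm init the "default" ServiceAccount does not exist until the controller-manager's ServiceAccount controller creates it, so minikube retries the lookup until it succeeds; that is the ~15.6s elevateKubeSystemPrivileges step timed just above. The manual equivalent is a single command, sketched here with the same context-name assumption as earlier:

    kubectl --context old-k8s-version-940806 -n default get serviceaccount default   # exits non-zero until the SA has been created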
	I0907 00:57:54.779949   46354 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:57:54.780038   46354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:57:54.782334   46354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:57:54.782624   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:57:54.782772   46354 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:57:54.782871   46354 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-940806"
	I0907 00:57:54.782890   46354 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-940806"
	I0907 00:57:54.782900   46354 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-940806"
	W0907 00:57:54.782908   46354 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:57:54.782918   46354 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-940806"
	W0907 00:57:54.782926   46354 addons.go:240] addon metrics-server should already be in state true
	I0907 00:57:54.782880   46354 config.go:182] Loaded profile config "old-k8s-version-940806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:57:54.782889   46354 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-940806"
	I0907 00:57:54.783049   46354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-940806"
	I0907 00:57:54.782963   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.782963   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.783499   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783500   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783528   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.783533   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.783571   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783599   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.802026   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44005
	I0907 00:57:54.802487   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.803108   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.803131   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.803164   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
	I0907 00:57:54.803164   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I0907 00:57:54.803512   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.803674   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.803710   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.804184   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.804215   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.804239   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.804259   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.804311   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.804327   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.804569   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.804668   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.804832   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.805067   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.805094   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.821660   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39335
	I0907 00:57:54.822183   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.822694   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.822720   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.823047   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.823247   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.823707   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I0907 00:57:54.824135   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.825021   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.825046   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.825082   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.827174   46354 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:57:54.825428   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.828768   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:57:54.828787   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:57:54.828808   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.829357   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.831479   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.833553   46354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:57:54.832288   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.832776   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.834996   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.835038   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.835055   46354 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:57:54.835067   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:57:54.835083   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.835140   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.835307   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.835410   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.836403   46354 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-940806"
	W0907 00:57:54.836424   46354 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:57:54.836451   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.836822   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.836851   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.838476   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.838920   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.838951   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.839218   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.839540   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.839719   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.839896   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.854883   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0907 00:57:54.855311   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.855830   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.855858   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.856244   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.856713   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.856737   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.872940   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39937
	I0907 00:57:54.873442   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.874030   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.874057   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.874433   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.874665   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.876568   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.876928   46354 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:57:54.876947   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:57:54.876966   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.879761   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.879993   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.880015   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.880248   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.880424   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.880591   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.880694   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.933915   46354 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-940806" context rescaled to 1 replicas
	I0907 00:57:54.933965   46354 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:57:54.936214   46354 out.go:177] * Verifying Kubernetes components...
	I0907 00:57:54.937844   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:57:55.011087   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:57:55.011114   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:57:55.020666   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:57:55.038411   46354 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-940806" to be "Ready" ...
	I0907 00:57:55.038474   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0907 00:57:55.066358   46354 node_ready.go:49] node "old-k8s-version-940806" has status "Ready":"True"
	I0907 00:57:55.066382   46354 node_ready.go:38] duration metric: took 27.94281ms waiting for node "old-k8s-version-940806" to be "Ready" ...
	I0907 00:57:55.066393   46354 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:57:55.076936   46354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace to be "Ready" ...
	I0907 00:57:55.118806   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:57:55.118835   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:57:55.145653   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:57:55.158613   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:57:55.158636   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:57:55.214719   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:57:56.905329   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.884630053s)
	I0907 00:57:56.905379   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905377   46354 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.866875113s)
	I0907 00:57:56.905392   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905403   46354 start.go:901] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I0907 00:57:56.905417   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.759735751s)
	I0907 00:57:56.905441   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905455   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905794   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.905842   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.905858   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.905878   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.905895   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905910   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905963   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906013   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906037   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.906047   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.906286   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906340   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906293   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906325   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906436   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906449   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.906459   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.906630   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906729   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906732   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906749   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.087889   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.873113752s)
	I0907 00:57:57.087946   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:57.087979   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:57.088366   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:57.089849   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:57.089880   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.089892   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:57.089899   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:57.090126   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:57.090146   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.090155   46354 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-940806"
	I0907 00:57:57.093060   46354 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:57:57.094326   46354 addons.go:502] enable addons completed in 2.311555161s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0907 00:57:57.115594   46354 pod_ready.go:102] pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:59.609005   46354 pod_ready.go:102] pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace has status "Ready":"False"
	I0907 00:58:00.605260   46354 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rf6lv" not found
	I0907 00:58:00.605285   46354 pod_ready.go:81] duration metric: took 5.528319392s waiting for pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace to be "Ready" ...
	E0907 00:58:00.605296   46354 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rf6lv" not found
	I0907 00:58:00.605305   46354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.623994   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace has status "Ready":"True"
	I0907 00:58:02.624020   46354 pod_ready.go:81] duration metric: took 2.01870868s waiting for pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.624039   46354 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bt454" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.629264   46354 pod_ready.go:92] pod "kube-proxy-bt454" in "kube-system" namespace has status "Ready":"True"
	I0907 00:58:02.629282   46354 pod_ready.go:81] duration metric: took 5.236562ms waiting for pod "kube-proxy-bt454" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.629288   46354 pod_ready.go:38] duration metric: took 7.562884581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:58:02.629301   46354 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:58:02.629339   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:58:02.644494   46354 api_server.go:72] duration metric: took 7.710498225s to wait for apiserver process to appear ...
	I0907 00:58:02.644515   46354 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:58:02.644529   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:58:02.651352   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 200:
	ok
	I0907 00:58:02.652147   46354 api_server.go:141] control plane version: v1.16.0
	I0907 00:58:02.652186   46354 api_server.go:131] duration metric: took 7.646808ms to wait for apiserver health ...
	I0907 00:58:02.652199   46354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:58:02.656482   46354 system_pods.go:59] 4 kube-system pods found
	I0907 00:58:02.656506   46354 system_pods.go:61] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.656513   46354 system_pods.go:61] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.656524   46354 system_pods.go:61] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.656534   46354 system_pods.go:61] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.656541   46354 system_pods.go:74] duration metric: took 4.333279ms to wait for pod list to return data ...
	I0907 00:58:02.656553   46354 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:58:02.659079   46354 default_sa.go:45] found service account: "default"
	I0907 00:58:02.659102   46354 default_sa.go:55] duration metric: took 2.543265ms for default service account to be created ...
	I0907 00:58:02.659110   46354 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:58:02.663028   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:02.663050   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.663058   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.663069   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.663077   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.663094   46354 retry.go:31] will retry after 205.506153ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:02.874261   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:02.874291   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.874299   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.874309   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.874318   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.874335   46354 retry.go:31] will retry after 265.617543ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:03.145704   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:03.145736   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:03.145745   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:03.145755   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:03.145764   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:03.145782   46354 retry.go:31] will retry after 459.115577ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:03.610425   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:03.610458   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:03.610466   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:03.610474   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:03.610482   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:03.610498   46354 retry.go:31] will retry after 411.97961ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:04.026961   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:04.026992   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:04.026997   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:04.027004   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:04.027011   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:04.027024   46354 retry.go:31] will retry after 633.680519ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:04.665840   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:04.665868   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:04.665877   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:04.665889   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:04.665899   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:04.665916   46354 retry.go:31] will retry after 680.962565ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:05.352621   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:05.352644   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:05.352652   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:05.352699   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:05.352710   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:05.352725   46354 retry.go:31] will retry after 939.996523ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:06.298740   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:06.298765   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:06.298770   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:06.298791   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:06.298803   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:06.298820   46354 retry.go:31] will retry after 1.103299964s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:07.407728   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:07.407753   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:07.407758   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:07.407766   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:07.407772   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:07.407785   46354 retry.go:31] will retry after 1.13694803s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:08.550198   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:08.550228   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:08.550236   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:08.550245   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:08.550252   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:08.550269   46354 retry.go:31] will retry after 2.240430665s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:10.796203   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:10.796228   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:10.796233   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:10.796240   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:10.796246   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:10.796261   46354 retry.go:31] will retry after 2.183105097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:12.985467   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:12.985491   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:12.985500   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:12.985510   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:12.985518   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:12.985535   46354 retry.go:31] will retry after 2.428546683s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:15.419138   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:15.419163   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:15.419168   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:15.419174   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:15.419181   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:15.419195   46354 retry.go:31] will retry after 2.778392129s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:18.202590   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:18.202621   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:18.202629   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:18.202639   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:18.202648   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:18.202670   46354 retry.go:31] will retry after 5.204092587s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:23.412120   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:23.412144   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:23.412157   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:23.412164   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:23.412171   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:23.412187   46354 retry.go:31] will retry after 6.095121382s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:29.513424   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:29.513449   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:29.513454   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:29.513462   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:29.513468   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:29.513482   46354 retry.go:31] will retry after 6.142679131s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:35.662341   46354 system_pods.go:86] 5 kube-system pods found
	I0907 00:58:35.662367   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:35.662372   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:35.662377   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Pending
	I0907 00:58:35.662383   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:35.662390   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:35.662408   46354 retry.go:31] will retry after 10.800349656s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:46.468817   46354 system_pods.go:86] 6 kube-system pods found
	I0907 00:58:46.468845   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:46.468854   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:58:46.468859   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:46.468867   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:58:46.468876   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:46.468884   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:46.468901   46354 retry.go:31] will retry after 10.570531489s: missing components: kube-apiserver, kube-controller-manager
	I0907 00:58:57.047784   46354 system_pods.go:86] 8 kube-system pods found
	I0907 00:58:57.047865   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:57.047892   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:58:57.048256   46354 system_pods.go:89] "kube-apiserver-old-k8s-version-940806" [6a513b1a-cad2-4136-a7b0-a86df04f6c09] Pending
	I0907 00:58:57.048272   46354 system_pods.go:89] "kube-controller-manager-old-k8s-version-940806" [5ff6ffdb-1b2c-4498-84ad-e2811a8dd16a] Pending
	I0907 00:58:57.048279   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:57.048286   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:58:57.048301   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:57.048315   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:57.048345   46354 retry.go:31] will retry after 14.06926028s: missing components: kube-apiserver, kube-controller-manager
	I0907 00:59:11.124216   46354 system_pods.go:86] 8 kube-system pods found
	I0907 00:59:11.124242   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:59:11.124248   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:59:11.124252   46354 system_pods.go:89] "kube-apiserver-old-k8s-version-940806" [6a513b1a-cad2-4136-a7b0-a86df04f6c09] Running
	I0907 00:59:11.124257   46354 system_pods.go:89] "kube-controller-manager-old-k8s-version-940806" [5ff6ffdb-1b2c-4498-84ad-e2811a8dd16a] Running
	I0907 00:59:11.124261   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:59:11.124265   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:59:11.124272   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:59:11.124276   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:59:11.124283   46354 system_pods.go:126] duration metric: took 1m8.465167722s to wait for k8s-apps to be running ...
	I0907 00:59:11.124289   46354 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:59:11.124328   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:59:11.140651   46354 system_svc.go:56] duration metric: took 16.348641ms WaitForService to wait for kubelet.
	I0907 00:59:11.140686   46354 kubeadm.go:581] duration metric: took 1m16.206690472s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:59:11.140714   46354 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:59:11.144185   46354 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:59:11.144212   46354 node_conditions.go:123] node cpu capacity is 2
	I0907 00:59:11.144224   46354 node_conditions.go:105] duration metric: took 3.50462ms to run NodePressure ...
	I0907 00:59:11.144235   46354 start.go:228] waiting for startup goroutines ...
	I0907 00:59:11.144244   46354 start.go:233] waiting for cluster config update ...
	I0907 00:59:11.144259   46354 start.go:242] writing updated cluster config ...
	I0907 00:59:11.144547   46354 ssh_runner.go:195] Run: rm -f paused
	I0907 00:59:11.194224   46354 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0907 00:59:11.196420   46354 out.go:177] 
	W0907 00:59:11.197939   46354 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0907 00:59:11.199287   46354 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0907 00:59:11.200770   46354 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-940806" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-07 00:51:46 UTC, ends at Thu 2023-09-07 01:08:12 UTC. --
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.054542419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c2ef91de-9587-461a-9df1-3fd63a18eab3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.678175715Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c1d0e847-d24f-4a0d-bea7-e689151190d7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.678276336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c1d0e847-d24f-4a0d-bea7-e689151190d7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.678446729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c1d0e847-d24f-4a0d-bea7-e689151190d7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.723506874Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9c594fbc-b389-41a1-bc6c-4f07af67449b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.723621370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9c594fbc-b389-41a1-bc6c-4f07af67449b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.723819475Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9c594fbc-b389-41a1-bc6c-4f07af67449b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.762544087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8c910b2f-c3f4-404a-8df8-703876268e58 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.762635288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8c910b2f-c3f4-404a-8df8-703876268e58 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.762820337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8c910b2f-c3f4-404a-8df8-703876268e58 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.798994543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7dedf2b8-2be7-4bd7-ba1d-5040ea772dea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.799059257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7dedf2b8-2be7-4bd7-ba1d-5040ea772dea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.799246542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7dedf2b8-2be7-4bd7-ba1d-5040ea772dea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.840583141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=295f3ea5-97d1-4a33-bb72-335fe057d226 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.840651541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=295f3ea5-97d1-4a33-bb72-335fe057d226 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.840877455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=295f3ea5-97d1-4a33-bb72-335fe057d226 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.879904692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a5ace48b-8263-45ca-b083-99fe846de00d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.880092278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a5ace48b-8263-45ca-b083-99fe846de00d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.880687266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a5ace48b-8263-45ca-b083-99fe846de00d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.920068069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1825e098-f538-4c77-a898-1ab92e6081a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.920187684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1825e098-f538-4c77-a898-1ab92e6081a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.920393310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1825e098-f538-4c77-a898-1ab92e6081a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.959223760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=584dcc0a-50a1-439d-b2f6-49499f805f33 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.959372959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=584dcc0a-50a1-439d-b2f6-49499f805f33 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:08:12 old-k8s-version-940806 crio[712]: time="2023-09-07 01:08:12.959544842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=584dcc0a-50a1-439d-b2f6-49499f805f33 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	505fd87a59c43       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   a22a0983e839b
	c16bcf217c95b       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   f45d4c026df49
	dcb0272fd2f33       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   3985a65b07e08
	6e0283355220b       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   712c8a64609b9
	9b851c02c8fdc       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   0fa66fd27dad8
	8e12829e3eb63       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   f4de0cb85a1d1
	00ea9e73f82d0       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   22cc68c770b8a
	
	* 
	* ==> coredns [dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219] <==
	* .:53
	2023-09-07T00:57:56.709Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-09-07T00:57:56.709Z [INFO] CoreDNS-1.6.2
	2023-09-07T00:57:56.709Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-09-07T00:58:21.040Z [INFO] plugin/reload: Running configuration MD5 = 6d61b2f41ed11e6ad276aa627263dbc3
	[INFO] Reloading complete
	2023-09-07T00:58:21.060Z [INFO] 127.0.0.1:42553 - 6572 "HINFO IN 7264912749835336230.1648971124024017391. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020135645s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-940806
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-940806
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=old-k8s-version-940806
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_07T00_57_39_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:57:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 01:07:34 +0000   Thu, 07 Sep 2023 00:57:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 01:07:34 +0000   Thu, 07 Sep 2023 00:57:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 01:07:34 +0000   Thu, 07 Sep 2023 00:57:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 01:07:34 +0000   Thu, 07 Sep 2023 00:57:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.245
	  Hostname:    old-k8s-version-940806
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 d1c883ac860c4cecba55236dd31e2013
	 System UUID:                d1c883ac-860c-4cec-ba55-236dd31e2013
	 Boot ID:                    4ad06931-8146-4d72-8fdb-ee1d1da21cbd
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-rvbpw                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-940806                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                kube-apiserver-old-k8s-version-940806             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                kube-controller-manager-old-k8s-version-940806    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                kube-proxy-bt454                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-940806             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                metrics-server-74d5856cc6-bgjns                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-940806     Node old-k8s-version-940806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)  kubelet, old-k8s-version-940806     Node old-k8s-version-940806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet, old-k8s-version-940806     Node old-k8s-version-940806 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-940806  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep 7 00:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093709] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.982880] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.520999] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158046] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.568740] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.821645] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.132260] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.150284] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.115884] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.221809] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Sep 7 00:52] systemd-fstab-generator[1027]: Ignoring "noauto" for root device
	[  +0.440443] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.318866] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.818834] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 7 00:57] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.645504] systemd-fstab-generator[3224]: Ignoring "noauto" for root device
	[Sep 7 00:58] kauditd_printk_skb: 11 callbacks suppressed
	
	* 
	* ==> etcd [6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6] <==
	* 2023-09-07 00:57:30.993201 I | raft: ba939c90038af751 became follower at term 1
	2023-09-07 00:57:31.002432 W | auth: simple token is not cryptographically signed
	2023-09-07 00:57:31.009348 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-09-07 00:57:31.010664 I | etcdserver: ba939c90038af751 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-07 00:57:31.011122 I | etcdserver/membership: added member ba939c90038af751 [https://192.168.83.245:2380] to cluster 1b1c08270f79fa14
	2023-09-07 00:57:31.012746 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-07 00:57:31.013176 I | embed: listening for metrics on http://192.168.83.245:2381
	2023-09-07 00:57:31.013365 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-07 00:57:31.096479 I | raft: ba939c90038af751 is starting a new election at term 1
	2023-09-07 00:57:31.096584 I | raft: ba939c90038af751 became candidate at term 2
	2023-09-07 00:57:31.096670 I | raft: ba939c90038af751 received MsgVoteResp from ba939c90038af751 at term 2
	2023-09-07 00:57:31.096727 I | raft: ba939c90038af751 became leader at term 2
	2023-09-07 00:57:31.096750 I | raft: raft.node: ba939c90038af751 elected leader ba939c90038af751 at term 2
	2023-09-07 00:57:31.097337 I | etcdserver: setting up the initial cluster version to 3.3
	2023-09-07 00:57:31.097719 I | etcdserver: published {Name:old-k8s-version-940806 ClientURLs:[https://192.168.83.245:2379]} to cluster 1b1c08270f79fa14
	2023-09-07 00:57:31.097979 I | embed: ready to serve client requests
	2023-09-07 00:57:31.099224 I | embed: serving client requests on 192.168.83.245:2379
	2023-09-07 00:57:31.099430 I | embed: ready to serve client requests
	2023-09-07 00:57:31.100807 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-07 00:57:31.101466 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-07 00:57:31.101665 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-07 00:57:56.143108 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-940806\" " with result "range_response_count:1 size:4370" took too long (316.707587ms) to execute
	2023-09-07 00:57:56.634187 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/metrics-server\" " with result "range_response_count:0 size:5" took too long (145.961145ms) to execute
	2023-09-07 01:07:31.142046 I | mvcc: store.index: compact 666
	2023-09-07 01:07:31.144308 I | mvcc: finished scheduled compaction at 666 (took 1.763277ms)
	
	* 
	* ==> kernel <==
	*  01:08:13 up 16 min,  0 users,  load average: 0.04, 0.15, 0.16
	Linux old-k8s-version-940806 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc] <==
	* I0907 01:00:58.349138       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0907 01:00:58.349289       1 handler_proxy.go:99] no RequestInfo found in the context
	E0907 01:00:58.349375       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:00:58.349386       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:02:35.377500       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0907 01:02:35.377614       1 handler_proxy.go:99] no RequestInfo found in the context
	E0907 01:02:35.377672       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:02:35.377720       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:03:35.378301       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0907 01:03:35.378428       1 handler_proxy.go:99] no RequestInfo found in the context
	E0907 01:03:35.378467       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:03:35.378478       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:05:35.379251       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0907 01:05:35.379458       1 handler_proxy.go:99] no RequestInfo found in the context
	E0907 01:05:35.379638       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:05:35.379655       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:07:35.381527       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0907 01:07:35.381690       1 handler_proxy.go:99] no RequestInfo found in the context
	E0907 01:07:35.381768       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:07:35.381776       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046] <==
	* E0907 01:01:56.810620       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:02:11.888414       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:02:27.062879       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:02:43.890742       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:02:57.315518       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:03:15.893370       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:03:27.568103       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:03:47.895137       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:03:57.820588       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:04:19.898274       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:04:28.072978       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:04:51.900385       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:04:58.325003       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:05:23.903236       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:05:28.577190       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:05:55.905500       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:05:58.829313       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:06:27.908431       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:06:29.081499       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0907 01:06:59.333698       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:06:59.910795       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:07:29.586368       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:07:31.913024       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:07:59.838116       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:08:03.915296       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958] <==
	* W0907 00:57:57.740507       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0907 00:57:57.765234       1 node.go:135] Successfully retrieved node IP: 192.168.83.245
	I0907 00:57:57.765445       1 server_others.go:149] Using iptables Proxier.
	I0907 00:57:57.769177       1 server.go:529] Version: v1.16.0
	I0907 00:57:57.775835       1 config.go:131] Starting endpoints config controller
	I0907 00:57:57.775907       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0907 00:57:57.776137       1 config.go:313] Starting service config controller
	I0907 00:57:57.776169       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0907 00:57:57.891683       1 shared_informer.go:204] Caches are synced for service config 
	I0907 00:57:57.891880       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e] <==
	* W0907 00:57:34.393785       1 authentication.go:79] Authentication is disabled
	I0907 00:57:34.393804       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0907 00:57:34.397502       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0907 00:57:34.426244       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0907 00:57:34.427582       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0907 00:57:34.432574       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0907 00:57:34.432765       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0907 00:57:34.433707       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0907 00:57:34.433775       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0907 00:57:34.434201       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0907 00:57:34.434334       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0907 00:57:34.443826       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0907 00:57:34.444597       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0907 00:57:34.448161       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0907 00:57:35.429060       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0907 00:57:35.436058       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0907 00:57:35.436254       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0907 00:57:35.443397       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0907 00:57:35.447419       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0907 00:57:35.448015       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0907 00:57:35.449405       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0907 00:57:35.452036       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0907 00:57:35.453809       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0907 00:57:35.454882       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0907 00:57:35.456617       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-07 00:51:46 UTC, ends at Thu 2023-09-07 01:08:13 UTC. --
	Sep 07 01:03:37 old-k8s-version-940806 kubelet[3230]: E0907 01:03:37.070614    3230 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 07 01:03:37 old-k8s-version-940806 kubelet[3230]: E0907 01:03:37.070683    3230 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 07 01:03:37 old-k8s-version-940806 kubelet[3230]: E0907 01:03:37.070714    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Sep 07 01:03:49 old-k8s-version-940806 kubelet[3230]: E0907 01:03:49.054765    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:04:05 old-k8s-version-940806 kubelet[3230]: E0907 01:04:05.046316    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:04:19 old-k8s-version-940806 kubelet[3230]: E0907 01:04:19.047032    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:04:33 old-k8s-version-940806 kubelet[3230]: E0907 01:04:33.051201    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:04:46 old-k8s-version-940806 kubelet[3230]: E0907 01:04:46.047029    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:04:59 old-k8s-version-940806 kubelet[3230]: E0907 01:04:59.046145    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:05:10 old-k8s-version-940806 kubelet[3230]: E0907 01:05:10.047296    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:05:24 old-k8s-version-940806 kubelet[3230]: E0907 01:05:24.046291    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:05:36 old-k8s-version-940806 kubelet[3230]: E0907 01:05:36.046843    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:05:49 old-k8s-version-940806 kubelet[3230]: E0907 01:05:49.047288    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:06:04 old-k8s-version-940806 kubelet[3230]: E0907 01:06:04.047684    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:06:15 old-k8s-version-940806 kubelet[3230]: E0907 01:06:15.046640    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:06:28 old-k8s-version-940806 kubelet[3230]: E0907 01:06:28.055424    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:06:41 old-k8s-version-940806 kubelet[3230]: E0907 01:06:41.047079    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:06:56 old-k8s-version-940806 kubelet[3230]: E0907 01:06:56.046418    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:07:09 old-k8s-version-940806 kubelet[3230]: E0907 01:07:09.046507    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:07:20 old-k8s-version-940806 kubelet[3230]: E0907 01:07:20.046597    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:07:28 old-k8s-version-940806 kubelet[3230]: E0907 01:07:28.147036    3230 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Sep 07 01:07:34 old-k8s-version-940806 kubelet[3230]: E0907 01:07:34.046605    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:07:46 old-k8s-version-940806 kubelet[3230]: E0907 01:07:46.046242    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:07:57 old-k8s-version-940806 kubelet[3230]: E0907 01:07:57.046389    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:08:11 old-k8s-version-940806 kubelet[3230]: E0907 01:08:11.046741    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248] <==
	* I0907 00:57:57.911712       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0907 00:57:57.923115       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0907 00:57:57.923199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0907 00:57:57.937471       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0907 00:57:57.938554       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-940806_c208f978-8d30-4fb4-b9b1-cd6dc7be4c2e!
	I0907 00:57:57.952853       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acca27ea-be6d-42da-a4af-f108a00ace8f", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-940806_c208f978-8d30-4fb4-b9b1-cd6dc7be4c2e became leader
	I0907 00:57:58.040208       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-940806_c208f978-8d30-4fb4-b9b1-cd6dc7be4c2e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-940806 -n old-k8s-version-940806
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-940806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-bgjns
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-940806 describe pod metrics-server-74d5856cc6-bgjns
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-940806 describe pod metrics-server-74d5856cc6-bgjns: exit status 1 (66.07447ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-bgjns" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-940806 describe pod metrics-server-74d5856cc6-bgjns: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.13s)
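
The post-mortem above can be retraced by hand; a minimal sketch, assuming the old-k8s-version-940806 profile still exists. The commands mirror the helpers_test.go calls shown above, and the fake.domain registry seen in the kubelet ImagePullBackOff errors comes from the metrics-server addon's --registries=MetricsServer=fake.domain override used in this run (see the Audit table below), so the pull failure itself is expected:

    # list pods that are not Running, as helpers_test.go:261 does
    kubectl --context old-k8s-version-940806 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'

    # inspect the metrics-server pod; in this run it had already been deleted, hence the NotFound above
    kubectl --context old-k8s-version-940806 -n kube-system describe pod metrics-server-74d5856cc6-bgjns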

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (466.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-546209 -n embed-certs-546209
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-07 01:12:53.666306305 +0000 UTC m=+5712.387761860
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-546209 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-546209 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.604µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-546209 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
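The image check that timed out above can be repeated by hand; a minimal sketch, assuming the embed-certs-546209 profile is still reachable. The describe call is what the harness ran; the jsonpath query is an assumed, narrower alternative (not something the harness executes) for comparing against the expected registry.k8s.io/echoserver:1.4 image:

    # what start_stop_delete_test.go:291 attempted (failed here with "context deadline exceeded")
    kubectl --context embed-certs-546209 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper

    # pull just the container image from the deployment spec
    kubectl --context embed-certs-546209 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o=jsonpath='{.spec.template.spec.containers[*].image}'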
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-546209 -n embed-certs-546209
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-546209 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-546209 logs -n 25: (1.179360813s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-546209            | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-488051 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | disable-driver-mounts-488051                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:45 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-940806             | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-773466  | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-321164                  | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-546209                 | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-773466       | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC | 07 Sep 23 00:56 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 01:10 UTC | 07 Sep 23 01:11 UTC |
	| start   | -p newest-cni-294457 --memory=2200 --alsologtostderr   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:11 UTC | 07 Sep 23 01:12 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-294457             | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-294457                                   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-294457                  | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-294457 --memory=2200 --alsologtostderr   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:12 UTC |
	| start   | -p auto-965889 --memory=3072                           | auto-965889                  | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/07 01:12:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 01:12:28.047459   52813 out.go:296] Setting OutFile to fd 1 ...
	I0907 01:12:28.047597   52813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 01:12:28.047605   52813 out.go:309] Setting ErrFile to fd 2...
	I0907 01:12:28.047609   52813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 01:12:28.047791   52813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 01:12:28.048335   52813 out.go:303] Setting JSON to false
	I0907 01:12:28.049288   52813 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6892,"bootTime":1694042256,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 01:12:28.049358   52813 start.go:138] virtualization: kvm guest
	I0907 01:12:28.051894   52813 out.go:177] * [auto-965889] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 01:12:28.053522   52813 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 01:12:28.053523   52813 notify.go:220] Checking for updates...
	I0907 01:12:28.055237   52813 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 01:12:28.056844   52813 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 01:12:28.058485   52813 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 01:12:28.059844   52813 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 01:12:28.061402   52813 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 01:12:28.063173   52813 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 01:12:28.063289   52813 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 01:12:28.063410   52813 config.go:182] Loaded profile config "newest-cni-294457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 01:12:28.063514   52813 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 01:12:28.100151   52813 out.go:177] * Using the kvm2 driver based on user configuration
	I0907 01:12:28.101661   52813 start.go:298] selected driver: kvm2
	I0907 01:12:28.101679   52813 start.go:902] validating driver "kvm2" against <nil>
	I0907 01:12:28.101695   52813 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 01:12:28.102753   52813 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 01:12:28.102891   52813 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 01:12:28.118472   52813 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 01:12:28.118546   52813 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0907 01:12:28.118766   52813 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 01:12:28.118835   52813 cni.go:84] Creating CNI manager for ""
	I0907 01:12:28.118841   52813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 01:12:28.118847   52813 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0907 01:12:28.118856   52813 start_flags.go:321] config:
	{Name:auto-965889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-965889 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 01:12:28.118998   52813 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 01:12:28.120838   52813 out.go:177] * Starting control plane node auto-965889 in cluster auto-965889
	I0907 01:12:27.831577   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:27.832132   52488 main.go:141] libmachine: (newest-cni-294457) DBG | unable to find current IP address of domain newest-cni-294457 in network mk-newest-cni-294457
	I0907 01:12:27.832168   52488 main.go:141] libmachine: (newest-cni-294457) DBG | I0907 01:12:27.832112   52523 retry.go:31] will retry after 3.111421305s: waiting for machine to come up
	I0907 01:12:30.947376   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:30.947941   52488 main.go:141] libmachine: (newest-cni-294457) DBG | unable to find current IP address of domain newest-cni-294457 in network mk-newest-cni-294457
	I0907 01:12:30.947967   52488 main.go:141] libmachine: (newest-cni-294457) DBG | I0907 01:12:30.947892   52523 retry.go:31] will retry after 4.449335994s: waiting for machine to come up
	I0907 01:12:28.122123   52813 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 01:12:28.122167   52813 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0907 01:12:28.122176   52813 cache.go:57] Caching tarball of preloaded images
	I0907 01:12:28.122247   52813 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 01:12:28.122258   52813 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 01:12:28.122367   52813 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/auto-965889/config.json ...
	I0907 01:12:28.122386   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/auto-965889/config.json: {Name:mk25ae05a793b75c1930ed74e157956a19e3c750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:12:28.122539   52813 start.go:365] acquiring machines lock for auto-965889: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 01:12:35.399813   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.400352   52488 main.go:141] libmachine: (newest-cni-294457) Found IP for machine: 192.168.72.213
	I0907 01:12:35.400372   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has current primary IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.400379   52488 main.go:141] libmachine: (newest-cni-294457) Reserving static IP address...
	I0907 01:12:35.400899   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "newest-cni-294457", mac: "52:54:00:eb:20:af", ip: "192.168.72.213"} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:35.400921   52488 main.go:141] libmachine: (newest-cni-294457) Reserved static IP address: 192.168.72.213
	I0907 01:12:35.400933   52488 main.go:141] libmachine: (newest-cni-294457) DBG | skip adding static IP to network mk-newest-cni-294457 - found existing host DHCP lease matching {name: "newest-cni-294457", mac: "52:54:00:eb:20:af", ip: "192.168.72.213"}
	I0907 01:12:35.400945   52488 main.go:141] libmachine: (newest-cni-294457) DBG | Getting to WaitForSSH function...
	I0907 01:12:35.400954   52488 main.go:141] libmachine: (newest-cni-294457) Waiting for SSH to be available...
	I0907 01:12:35.403689   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.404100   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:35.404144   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.404272   52488 main.go:141] libmachine: (newest-cni-294457) DBG | Using SSH client type: external
	I0907 01:12:35.404312   52488 main.go:141] libmachine: (newest-cni-294457) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/newest-cni-294457/id_rsa (-rw-------)
	I0907 01:12:35.404366   52488 main.go:141] libmachine: (newest-cni-294457) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/newest-cni-294457/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 01:12:35.404395   52488 main.go:141] libmachine: (newest-cni-294457) DBG | About to run SSH command:
	I0907 01:12:35.404408   52488 main.go:141] libmachine: (newest-cni-294457) DBG | exit 0
	I0907 01:12:35.494737   52488 main.go:141] libmachine: (newest-cni-294457) DBG | SSH cmd err, output: <nil>: 
	I0907 01:12:35.495139   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetConfigRaw
	I0907 01:12:35.495983   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetIP
	I0907 01:12:35.498273   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.498661   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:35.498692   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.498944   52488 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/newest-cni-294457/config.json ...
	I0907 01:12:35.499124   52488 machine.go:88] provisioning docker machine ...
	I0907 01:12:35.499142   52488 main.go:141] libmachine: (newest-cni-294457) Calling .DriverName
	I0907 01:12:35.499319   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetMachineName
	I0907 01:12:35.499486   52488 buildroot.go:166] provisioning hostname "newest-cni-294457"
	I0907 01:12:35.499509   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetMachineName
	I0907 01:12:35.499664   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHHostname
	I0907 01:12:35.502172   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.502641   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:35.502677   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.502834   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHPort
	I0907 01:12:35.503022   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHKeyPath
	I0907 01:12:35.503183   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHKeyPath
	I0907 01:12:35.503334   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHUsername
	I0907 01:12:35.503485   52488 main.go:141] libmachine: Using SSH client type: native
	I0907 01:12:35.503886   52488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0907 01:12:35.503899   52488 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-294457 && echo "newest-cni-294457" | sudo tee /etc/hostname
	I0907 01:12:35.640477   52488 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-294457
	
	I0907 01:12:35.640519   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHHostname
	I0907 01:12:35.643082   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.643520   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:35.643545   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.643700   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHPort
	I0907 01:12:35.643873   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHKeyPath
	I0907 01:12:35.643993   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHKeyPath
	I0907 01:12:35.644199   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHUsername
	I0907 01:12:35.644351   52488 main.go:141] libmachine: Using SSH client type: native
	I0907 01:12:35.644735   52488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0907 01:12:35.644762   52488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-294457' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-294457/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-294457' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 01:12:35.775942   52488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
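The shell snippet above only touches /etc/hosts when the new hostname is not yet mapped: it rewrites an existing 127.0.1.1 line if there is one, otherwise appends a new entry. A minimal Go sketch of the same idempotent update, operating on the file contents in memory (the helper name and hard-coded example are illustrative, not minikube code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname returns the /etc/hosts contents with a 127.0.1.1 entry for
// name, rewriting an existing 127.0.1.1 line or appending one if none exists.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // already mapped, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "newest-cni-294457"))
}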
	I0907 01:12:35.775969   52488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 01:12:35.776008   52488 buildroot.go:174] setting up certificates
	I0907 01:12:35.776018   52488 provision.go:83] configureAuth start
	I0907 01:12:35.776031   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetMachineName
	I0907 01:12:35.776328   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetIP
	I0907 01:12:35.779121   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.779444   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:35.779497   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.779661   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHHostname
	I0907 01:12:35.781981   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.782291   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:35.782326   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.782543   52488 provision.go:138] copyHostCerts
	I0907 01:12:35.782597   52488 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 01:12:35.782609   52488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 01:12:35.782669   52488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 01:12:35.782768   52488 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 01:12:35.782792   52488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 01:12:35.782836   52488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 01:12:35.782929   52488 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 01:12:35.782944   52488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 01:12:35.782971   52488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 01:12:35.783031   52488 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.newest-cni-294457 san=[192.168.72.213 192.168.72.213 localhost 127.0.0.1 minikube newest-cni-294457]
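The server certificate generated above is signed by the shared minikube CA and carries the node IP, localhost and hostname as SANs. A minimal sketch of producing such a certificate with the standard library (not the provision.go implementation; the file paths, PKCS#1 RSA key format and omitted error handling are assumptions kept short for illustration):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and private key (paths and key format assumed).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// Server key plus a template with the IP and DNS SANs from the log above.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-294457"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.72.213"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-294457"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}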
	I0907 01:12:35.944162   52488 provision.go:172] copyRemoteCerts
	I0907 01:12:35.944223   52488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 01:12:35.944250   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHHostname
	I0907 01:12:35.947199   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.947596   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:35.947633   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:35.947842   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHPort
	I0907 01:12:35.948069   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHKeyPath
	I0907 01:12:35.948224   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHUsername
	I0907 01:12:35.948403   52488 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/newest-cni-294457/id_rsa Username:docker}
	I0907 01:12:36.039982   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 01:12:36.063065   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 01:12:36.695975   52813 start.go:369] acquired machines lock for "auto-965889" in 8.57330115s
	I0907 01:12:36.696088   52813 start.go:93] Provisioning new machine with config: &{Name:auto-965889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.28.1 ClusterName:auto-965889 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 01:12:36.696229   52813 start.go:125] createHost starting for "" (driver="kvm2")
	I0907 01:12:36.698269   52813 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0907 01:12:36.698470   52813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:12:36.698525   52813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:12:36.717364   52813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I0907 01:12:36.717776   52813 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:12:36.718406   52813 main.go:141] libmachine: Using API Version  1
	I0907 01:12:36.718433   52813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:12:36.718799   52813 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:12:36.718985   52813 main.go:141] libmachine: (auto-965889) Calling .GetMachineName
	I0907 01:12:36.719127   52813 main.go:141] libmachine: (auto-965889) Calling .DriverName
	I0907 01:12:36.719272   52813 start.go:159] libmachine.API.Create for "auto-965889" (driver="kvm2")
	I0907 01:12:36.719304   52813 client.go:168] LocalClient.Create starting
	I0907 01:12:36.719348   52813 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem
	I0907 01:12:36.719382   52813 main.go:141] libmachine: Decoding PEM data...
	I0907 01:12:36.719401   52813 main.go:141] libmachine: Parsing certificate...
	I0907 01:12:36.719450   52813 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem
	I0907 01:12:36.719471   52813 main.go:141] libmachine: Decoding PEM data...
	I0907 01:12:36.719483   52813 main.go:141] libmachine: Parsing certificate...
	I0907 01:12:36.719505   52813 main.go:141] libmachine: Running pre-create checks...
	I0907 01:12:36.719514   52813 main.go:141] libmachine: (auto-965889) Calling .PreCreateCheck
	I0907 01:12:36.719827   52813 main.go:141] libmachine: (auto-965889) Calling .GetConfigRaw
	I0907 01:12:36.720255   52813 main.go:141] libmachine: Creating machine...
	I0907 01:12:36.720274   52813 main.go:141] libmachine: (auto-965889) Calling .Create
	I0907 01:12:36.720386   52813 main.go:141] libmachine: (auto-965889) Creating KVM machine...
	I0907 01:12:36.721629   52813 main.go:141] libmachine: (auto-965889) DBG | found existing default KVM network
	I0907 01:12:36.723103   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:36.722903   52854 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1c:f4:64} reservation:<nil>}
	I0907 01:12:36.723889   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:36.723792   52854 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:28:e3:92} reservation:<nil>}
	I0907 01:12:36.725077   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:36.724986   52854 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002d8ec0}
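network.go skips 192.168.39.0/24 and 192.168.50.0/24 because they are already taken and settles on 192.168.61.0/24. A simplified Go sketch of that selection against a list of in-use CIDRs (the 11-wide step and the helper name are assumptions inferred from the 39/50/61 pattern in this log; the real code also inspects host interfaces and reservations):

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate 192.168.X.0/24 that does not
// overlap any in-use CIDR, stepping X by 11 as in the log above.
func firstFreeSubnet(inUse []string) string {
	var used []*net.IPNet
	for _, c := range inUse {
		if _, n, err := net.ParseCIDR(c); err == nil {
			used = append(used, n)
		}
	}
	for x := 39; x <= 254; x += 11 {
		candidate := fmt.Sprintf("192.168.%d.0/24", x)
		_, cn, _ := net.ParseCIDR(candidate)
		taken := false
		for _, u := range used {
			if u.Contains(cn.IP) || cn.Contains(u.IP) {
				taken = true
				break
			}
		}
		if !taken {
			return candidate
		}
	}
	return ""
}

func main() {
	// Prints 192.168.61.0/24, matching the subnet chosen in the log.
	fmt.Println(firstFreeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24"}))
}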
	I0907 01:12:36.730414   52813 main.go:141] libmachine: (auto-965889) DBG | trying to create private KVM network mk-auto-965889 192.168.61.0/24...
	I0907 01:12:36.809039   52813 main.go:141] libmachine: (auto-965889) DBG | private KVM network mk-auto-965889 192.168.61.0/24 created
	I0907 01:12:36.809170   52813 main.go:141] libmachine: (auto-965889) Setting up store path in /home/jenkins/minikube-integration/17174-6470/.minikube/machines/auto-965889 ...
	I0907 01:12:36.809218   52813 main.go:141] libmachine: (auto-965889) Building disk image from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0907 01:12:36.809236   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:36.809159   52854 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 01:12:36.809308   52813 main.go:141] libmachine: (auto-965889) Downloading /home/jenkins/minikube-integration/17174-6470/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0907 01:12:37.029336   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:37.029206   52854 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/auto-965889/id_rsa...
	I0907 01:12:37.327242   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:37.327132   52854 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/auto-965889/auto-965889.rawdisk...
	I0907 01:12:37.327272   52813 main.go:141] libmachine: (auto-965889) DBG | Writing magic tar header
	I0907 01:12:37.327287   52813 main.go:141] libmachine: (auto-965889) DBG | Writing SSH key tar header
	I0907 01:12:37.327301   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:37.327259   52854 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/auto-965889 ...
	I0907 01:12:37.327391   52813 main.go:141] libmachine: (auto-965889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/auto-965889
	I0907 01:12:37.327421   52813 main.go:141] libmachine: (auto-965889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines
	I0907 01:12:37.327463   52813 main.go:141] libmachine: (auto-965889) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/auto-965889 (perms=drwx------)
	I0907 01:12:37.327481   52813 main.go:141] libmachine: (auto-965889) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines (perms=drwxr-xr-x)
	I0907 01:12:37.327494   52813 main.go:141] libmachine: (auto-965889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 01:12:37.327541   52813 main.go:141] libmachine: (auto-965889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470
	I0907 01:12:37.327570   52813 main.go:141] libmachine: (auto-965889) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube (perms=drwxr-xr-x)
	I0907 01:12:37.327585   52813 main.go:141] libmachine: (auto-965889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0907 01:12:37.327602   52813 main.go:141] libmachine: (auto-965889) DBG | Checking permissions on dir: /home/jenkins
	I0907 01:12:37.327616   52813 main.go:141] libmachine: (auto-965889) DBG | Checking permissions on dir: /home
	I0907 01:12:37.327634   52813 main.go:141] libmachine: (auto-965889) DBG | Skipping /home - not owner
	I0907 01:12:37.327655   52813 main.go:141] libmachine: (auto-965889) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470 (perms=drwxrwxr-x)
	I0907 01:12:37.327672   52813 main.go:141] libmachine: (auto-965889) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0907 01:12:37.327690   52813 main.go:141] libmachine: (auto-965889) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0907 01:12:37.327706   52813 main.go:141] libmachine: (auto-965889) Creating domain...
	I0907 01:12:37.328850   52813 main.go:141] libmachine: (auto-965889) define libvirt domain using xml: 
	I0907 01:12:37.328891   52813 main.go:141] libmachine: (auto-965889) <domain type='kvm'>
	I0907 01:12:37.328907   52813 main.go:141] libmachine: (auto-965889)   <name>auto-965889</name>
	I0907 01:12:37.328921   52813 main.go:141] libmachine: (auto-965889)   <memory unit='MiB'>3072</memory>
	I0907 01:12:37.328935   52813 main.go:141] libmachine: (auto-965889)   <vcpu>2</vcpu>
	I0907 01:12:37.328948   52813 main.go:141] libmachine: (auto-965889)   <features>
	I0907 01:12:37.328962   52813 main.go:141] libmachine: (auto-965889)     <acpi/>
	I0907 01:12:37.328973   52813 main.go:141] libmachine: (auto-965889)     <apic/>
	I0907 01:12:37.328997   52813 main.go:141] libmachine: (auto-965889)     <pae/>
	I0907 01:12:37.329009   52813 main.go:141] libmachine: (auto-965889)     
	I0907 01:12:37.329020   52813 main.go:141] libmachine: (auto-965889)   </features>
	I0907 01:12:37.329033   52813 main.go:141] libmachine: (auto-965889)   <cpu mode='host-passthrough'>
	I0907 01:12:37.329042   52813 main.go:141] libmachine: (auto-965889)   
	I0907 01:12:37.329055   52813 main.go:141] libmachine: (auto-965889)   </cpu>
	I0907 01:12:37.329069   52813 main.go:141] libmachine: (auto-965889)   <os>
	I0907 01:12:37.329081   52813 main.go:141] libmachine: (auto-965889)     <type>hvm</type>
	I0907 01:12:37.329095   52813 main.go:141] libmachine: (auto-965889)     <boot dev='cdrom'/>
	I0907 01:12:37.329108   52813 main.go:141] libmachine: (auto-965889)     <boot dev='hd'/>
	I0907 01:12:37.329140   52813 main.go:141] libmachine: (auto-965889)     <bootmenu enable='no'/>
	I0907 01:12:37.329163   52813 main.go:141] libmachine: (auto-965889)   </os>
	I0907 01:12:37.329176   52813 main.go:141] libmachine: (auto-965889)   <devices>
	I0907 01:12:37.329188   52813 main.go:141] libmachine: (auto-965889)     <disk type='file' device='cdrom'>
	I0907 01:12:37.329205   52813 main.go:141] libmachine: (auto-965889)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/auto-965889/boot2docker.iso'/>
	I0907 01:12:37.329216   52813 main.go:141] libmachine: (auto-965889)       <target dev='hdc' bus='scsi'/>
	I0907 01:12:37.329235   52813 main.go:141] libmachine: (auto-965889)       <readonly/>
	I0907 01:12:37.329249   52813 main.go:141] libmachine: (auto-965889)     </disk>
	I0907 01:12:37.329262   52813 main.go:141] libmachine: (auto-965889)     <disk type='file' device='disk'>
	I0907 01:12:37.329273   52813 main.go:141] libmachine: (auto-965889)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0907 01:12:37.329290   52813 main.go:141] libmachine: (auto-965889)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/auto-965889/auto-965889.rawdisk'/>
	I0907 01:12:37.329303   52813 main.go:141] libmachine: (auto-965889)       <target dev='hda' bus='virtio'/>
	I0907 01:12:37.329316   52813 main.go:141] libmachine: (auto-965889)     </disk>
	I0907 01:12:37.329332   52813 main.go:141] libmachine: (auto-965889)     <interface type='network'>
	I0907 01:12:37.329345   52813 main.go:141] libmachine: (auto-965889)       <source network='mk-auto-965889'/>
	I0907 01:12:37.329356   52813 main.go:141] libmachine: (auto-965889)       <model type='virtio'/>
	I0907 01:12:37.329369   52813 main.go:141] libmachine: (auto-965889)     </interface>
	I0907 01:12:37.329377   52813 main.go:141] libmachine: (auto-965889)     <interface type='network'>
	I0907 01:12:37.329390   52813 main.go:141] libmachine: (auto-965889)       <source network='default'/>
	I0907 01:12:37.329401   52813 main.go:141] libmachine: (auto-965889)       <model type='virtio'/>
	I0907 01:12:37.329414   52813 main.go:141] libmachine: (auto-965889)     </interface>
	I0907 01:12:37.329425   52813 main.go:141] libmachine: (auto-965889)     <serial type='pty'>
	I0907 01:12:37.329438   52813 main.go:141] libmachine: (auto-965889)       <target port='0'/>
	I0907 01:12:37.329449   52813 main.go:141] libmachine: (auto-965889)     </serial>
	I0907 01:12:37.329465   52813 main.go:141] libmachine: (auto-965889)     <console type='pty'>
	I0907 01:12:37.329477   52813 main.go:141] libmachine: (auto-965889)       <target type='serial' port='0'/>
	I0907 01:12:37.329487   52813 main.go:141] libmachine: (auto-965889)     </console>
	I0907 01:12:37.329509   52813 main.go:141] libmachine: (auto-965889)     <rng model='virtio'>
	I0907 01:12:37.329520   52813 main.go:141] libmachine: (auto-965889)       <backend model='random'>/dev/random</backend>
	I0907 01:12:37.329527   52813 main.go:141] libmachine: (auto-965889)     </rng>
	I0907 01:12:37.329536   52813 main.go:141] libmachine: (auto-965889)     
	I0907 01:12:37.329544   52813 main.go:141] libmachine: (auto-965889)     
	I0907 01:12:37.329553   52813 main.go:141] libmachine: (auto-965889)   </devices>
	I0907 01:12:37.329568   52813 main.go:141] libmachine: (auto-965889) </domain>
	I0907 01:12:37.329649   52813 main.go:141] libmachine: (auto-965889) 
	I0907 01:12:37.334008   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:13:ec:4c in network default
	I0907 01:12:37.334873   52813 main.go:141] libmachine: (auto-965889) Ensuring networks are active...
	I0907 01:12:37.334896   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:37.335682   52813 main.go:141] libmachine: (auto-965889) Ensuring network default is active
	I0907 01:12:37.336115   52813 main.go:141] libmachine: (auto-965889) Ensuring network mk-auto-965889 is active
	I0907 01:12:37.336643   52813 main.go:141] libmachine: (auto-965889) Getting domain xml...
	I0907 01:12:37.337387   52813 main.go:141] libmachine: (auto-965889) Creating domain...
	I0907 01:12:36.086605   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0907 01:12:36.109686   52488 provision.go:86] duration metric: configureAuth took 333.658511ms
	I0907 01:12:36.109708   52488 buildroot.go:189] setting minikube options for container-runtime
	I0907 01:12:36.109879   52488 config.go:182] Loaded profile config "newest-cni-294457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 01:12:36.109955   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHHostname
	I0907 01:12:36.112638   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:36.112965   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:36.112988   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:36.113125   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHPort
	I0907 01:12:36.113324   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHKeyPath
	I0907 01:12:36.113473   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHKeyPath
	I0907 01:12:36.113585   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHUsername
	I0907 01:12:36.113830   52488 main.go:141] libmachine: Using SSH client type: native
	I0907 01:12:36.114235   52488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0907 01:12:36.114251   52488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 01:12:36.434302   52488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 01:12:36.434330   52488 machine.go:91] provisioned docker machine in 935.193335ms
	I0907 01:12:36.434342   52488 start.go:300] post-start starting for "newest-cni-294457" (driver="kvm2")
	I0907 01:12:36.434354   52488 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 01:12:36.434373   52488 main.go:141] libmachine: (newest-cni-294457) Calling .DriverName
	I0907 01:12:36.434716   52488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 01:12:36.434754   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHHostname
	I0907 01:12:36.437794   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:36.438219   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:36.438250   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:36.438424   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHPort
	I0907 01:12:36.438628   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHKeyPath
	I0907 01:12:36.438804   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHUsername
	I0907 01:12:36.438987   52488 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/newest-cni-294457/id_rsa Username:docker}
	I0907 01:12:36.534308   52488 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 01:12:36.538474   52488 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 01:12:36.538498   52488 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 01:12:36.538568   52488 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 01:12:36.538637   52488 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 01:12:36.538717   52488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 01:12:36.547919   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 01:12:36.569858   52488 start.go:303] post-start completed in 135.502587ms
	I0907 01:12:36.569876   52488 fix.go:56] fixHost completed within 20.381647551s
	I0907 01:12:36.569895   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHHostname
	I0907 01:12:36.572428   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:36.572757   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:36.572791   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:36.572973   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHPort
	I0907 01:12:36.573155   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHKeyPath
	I0907 01:12:36.573322   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHKeyPath
	I0907 01:12:36.573529   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHUsername
	I0907 01:12:36.573675   52488 main.go:141] libmachine: Using SSH client type: native
	I0907 01:12:36.574063   52488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0907 01:12:36.574080   52488 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0907 01:12:36.695678   52488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694049156.639840872
	
	I0907 01:12:36.695700   52488 fix.go:206] guest clock: 1694049156.639840872
	I0907 01:12:36.695710   52488 fix.go:219] Guest: 2023-09-07 01:12:36.639840872 +0000 UTC Remote: 2023-09-07 01:12:36.569879592 +0000 UTC m=+20.523586619 (delta=69.96128ms)
	I0907 01:12:36.695766   52488 fix.go:190] guest clock delta is within tolerance: 69.96128ms
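fix.go reads the guest clock over SSH as epoch "seconds.nanoseconds", subtracts it from the host clock, and only resyncs when the drift exceeds a tolerance. A small sketch of that comparison using the values from this log (the one-second tolerance here is an illustrative assumption, not minikube's constant):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestDelta parses the "seconds.nanoseconds" string returned by the guest and
// reports how far it drifts from the supplied host time.
func guestDelta(out string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return host.Sub(time.Unix(sec, nsec)), nil
}

func main() {
	host := time.Date(2023, 9, 7, 1, 12, 36, 569879592, time.UTC)
	d, _ := guestDelta("1694049156.639840872", host)
	const tolerance = time.Second
	fmt.Printf("delta=%v withinTolerance=%v\n", d, d < tolerance && d > -tolerance)
}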
	I0907 01:12:36.695774   52488 start.go:83] releasing machines lock for "newest-cni-294457", held for 20.507565219s
	I0907 01:12:36.695808   52488 main.go:141] libmachine: (newest-cni-294457) Calling .DriverName
	I0907 01:12:36.696124   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetIP
	I0907 01:12:36.699133   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:36.699468   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:36.699501   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:36.699613   52488 main.go:141] libmachine: (newest-cni-294457) Calling .DriverName
	I0907 01:12:36.700111   52488 main.go:141] libmachine: (newest-cni-294457) Calling .DriverName
	I0907 01:12:36.700288   52488 main.go:141] libmachine: (newest-cni-294457) Calling .DriverName
	I0907 01:12:36.700364   52488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 01:12:36.700404   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHHostname
	I0907 01:12:36.700693   52488 ssh_runner.go:195] Run: cat /version.json
	I0907 01:12:36.700715   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHHostname
	I0907 01:12:36.703366   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:36.703708   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:36.703737   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:36.703759   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:36.703894   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHPort
	I0907 01:12:36.704079   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHKeyPath
	I0907 01:12:36.704200   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHUsername
	I0907 01:12:36.704221   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:36.704247   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:36.704366   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHPort
	I0907 01:12:36.704415   52488 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/newest-cni-294457/id_rsa Username:docker}
	I0907 01:12:36.704484   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHKeyPath
	I0907 01:12:36.704609   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetSSHUsername
	I0907 01:12:36.704740   52488 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/newest-cni-294457/id_rsa Username:docker}
	I0907 01:12:36.800264   52488 ssh_runner.go:195] Run: systemctl --version
	I0907 01:12:36.824760   52488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 01:12:36.976109   52488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 01:12:36.982834   52488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 01:12:36.982896   52488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 01:12:37.000383   52488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 01:12:37.000405   52488 start.go:466] detecting cgroup driver to use...
	I0907 01:12:37.000459   52488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 01:12:37.016987   52488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 01:12:37.030609   52488 docker.go:196] disabling cri-docker service (if available) ...
	I0907 01:12:37.030653   52488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 01:12:37.045077   52488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 01:12:37.060260   52488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 01:12:37.185692   52488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 01:12:37.318415   52488 docker.go:212] disabling docker service ...
	I0907 01:12:37.318512   52488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 01:12:37.334218   52488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 01:12:37.348053   52488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 01:12:37.475385   52488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 01:12:37.609025   52488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 01:12:37.623929   52488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 01:12:37.642849   52488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 01:12:37.642913   52488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:12:37.654082   52488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 01:12:37.654140   52488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:12:37.666042   52488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:12:37.676788   52488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:12:37.688261   52488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
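The four commands above configure CRI-O in place with sed: pin the pause image, switch the cgroup manager, drop any existing conmon_cgroup setting, and re-add conmon_cgroup = "pod" right after the cgroup_manager line. A rough Go equivalent operating on the config file contents (regex-based like the sed calls; not the actual crio.go implementation):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same edits as the sed commands above: set the
// pause image and cgroup manager, remove conmon_cgroup, then add it back as
// "pod" directly after the cgroup_manager line.
func rewriteCrioConf(conf, pauseImage, cgroupDriver string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "`+pauseImage+`"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "`+cgroupDriver+`"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	return regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}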
	I0907 01:12:37.699430   52488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 01:12:37.709487   52488 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 01:12:37.709532   52488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 01:12:37.724865   52488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 01:12:37.734306   52488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 01:12:37.855013   52488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 01:12:38.040959   52488 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 01:12:38.041022   52488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 01:12:38.048215   52488 start.go:534] Will wait 60s for crictl version
	I0907 01:12:38.048278   52488 ssh_runner.go:195] Run: which crictl
	I0907 01:12:38.052548   52488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 01:12:38.085768   52488 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 01:12:38.085845   52488 ssh_runner.go:195] Run: crio --version
	I0907 01:12:38.141547   52488 ssh_runner.go:195] Run: crio --version
	I0907 01:12:38.207918   52488 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 01:12:38.209487   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetIP
	I0907 01:12:38.213078   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:38.213447   52488 main.go:141] libmachine: (newest-cni-294457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:20:af", ip: ""} in network mk-newest-cni-294457: {Iface:virbr2 ExpiryTime:2023-09-07 02:12:29 +0000 UTC Type:0 Mac:52:54:00:eb:20:af Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:newest-cni-294457 Clientid:01:52:54:00:eb:20:af}
	I0907 01:12:38.213479   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined IP address 192.168.72.213 and MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:38.213721   52488 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0907 01:12:38.218081   52488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 01:12:38.232217   52488 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0907 01:12:38.233831   52488 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 01:12:38.233899   52488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 01:12:38.276357   52488 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 01:12:38.276484   52488 ssh_runner.go:195] Run: which lz4
	I0907 01:12:38.281645   52488 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0907 01:12:38.285872   52488 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 01:12:38.285948   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0907 01:12:40.248074   52488 crio.go:444] Took 1.966460 seconds to copy over tarball
	I0907 01:12:40.248172   52488 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
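Before copying the 457 MB preload tarball and extracting it with tar -I lz4, minikube asks CRI-O for its image list via `crictl images --output json` and only falls back to the tarball when a required image (here kube-apiserver v1.28.1) is missing. A sketch of that check against the crictl JSON output (the struct below only models the repoTags field and is an assumption about the full schema):

package main

import (
	"encoding/json"
	"fmt"
)

// crictl images --output json returns an object with an "images" array whose
// entries carry repoTags; hasImage reports whether any entry matches ref.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(out []byte, ref string) (bool, error) {
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == ref {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	out := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`)
	ok, _ := hasImage(out, "registry.k8s.io/kube-apiserver:v1.28.1")
	fmt.Println(ok) // false -> fall back to the preload tarball
}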
	I0907 01:12:38.753648   52813 main.go:141] libmachine: (auto-965889) Waiting to get IP...
	I0907 01:12:38.754680   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:38.755168   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:38.755189   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:38.755151   52854 retry.go:31] will retry after 199.256502ms: waiting for machine to come up
	I0907 01:12:38.955807   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:38.956330   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:38.956366   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:38.956290   52854 retry.go:31] will retry after 361.561071ms: waiting for machine to come up
	I0907 01:12:39.320169   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:39.320724   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:39.320751   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:39.320682   52854 retry.go:31] will retry after 322.504297ms: waiting for machine to come up
	I0907 01:12:39.645001   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:39.645549   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:39.645575   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:39.645508   52854 retry.go:31] will retry after 417.736353ms: waiting for machine to come up
	I0907 01:12:40.064918   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:40.065428   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:40.065458   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:40.065390   52854 retry.go:31] will retry after 555.160889ms: waiting for machine to come up
	I0907 01:12:40.621654   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:40.622161   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:40.622237   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:40.622058   52854 retry.go:31] will retry after 766.952366ms: waiting for machine to come up
	I0907 01:12:41.391170   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:41.391726   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:41.391757   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:41.391675   52854 retry.go:31] will retry after 962.644719ms: waiting for machine to come up
	I0907 01:12:42.356466   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:42.356980   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:42.357008   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:42.356939   52854 retry.go:31] will retry after 993.621504ms: waiting for machine to come up
	I0907 01:12:43.431729   52488 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.18352118s)
	I0907 01:12:43.431761   52488 crio.go:451] Took 3.183650 seconds to extract the tarball
	I0907 01:12:43.431771   52488 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 01:12:43.479499   52488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 01:12:43.520962   52488 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 01:12:43.521000   52488 cache_images.go:84] Images are preloaded, skipping loading
	I0907 01:12:43.521073   52488 ssh_runner.go:195] Run: crio config
	I0907 01:12:43.585333   52488 cni.go:84] Creating CNI manager for ""
	I0907 01:12:43.585362   52488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 01:12:43.585384   52488 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0907 01:12:43.585407   52488 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.213 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-294457 NodeName:newest-cni-294457 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.72.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 01:12:43.585573   52488 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-294457"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 01:12:43.585635   52488 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-294457 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:newest-cni-294457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
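The kubelet drop-in above is rendered from the node's settings: the binary path for the Kubernetes version, the CRI-O socket as runtime endpoint, and the node's hostname and IP. A toy text/template rendering of the same unit; the field names and the dropped feature-gates flag are illustrative simplifications, not minikube's kubeadm.go types:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.RuntimeEndpoint}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Render with the values seen in this log.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.28.1",
		"RuntimeEndpoint":   "unix:///var/run/crio/crio.sock",
		"NodeName":          "newest-cni-294457",
		"NodeIP":            "192.168.72.213",
	})
}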
	I0907 01:12:43.585685   52488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 01:12:43.595605   52488 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 01:12:43.595676   52488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 01:12:43.604673   52488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0907 01:12:43.625262   52488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 01:12:43.643992   52488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0907 01:12:43.663338   52488 ssh_runner.go:195] Run: grep 192.168.72.213	control-plane.minikube.internal$ /etc/hosts
	I0907 01:12:43.667685   52488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 01:12:43.681480   52488 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/newest-cni-294457 for IP: 192.168.72.213
	I0907 01:12:43.681538   52488 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:12:43.681706   52488 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 01:12:43.681783   52488 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 01:12:43.681868   52488 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/newest-cni-294457/client.key
	I0907 01:12:43.681952   52488 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/newest-cni-294457/apiserver.key.0982f839
	I0907 01:12:43.682009   52488 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/newest-cni-294457/proxy-client.key
	I0907 01:12:43.682138   52488 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 01:12:43.682177   52488 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 01:12:43.682193   52488 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 01:12:43.682230   52488 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 01:12:43.682263   52488 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 01:12:43.682309   52488 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 01:12:43.682367   52488 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 01:12:43.683036   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/newest-cni-294457/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 01:12:43.709850   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/newest-cni-294457/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 01:12:43.733969   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/newest-cni-294457/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 01:12:43.759850   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/newest-cni-294457/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 01:12:43.784019   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 01:12:43.811374   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 01:12:43.840433   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 01:12:43.865374   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 01:12:43.891748   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 01:12:43.918625   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 01:12:43.944901   52488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 01:12:43.971251   52488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 01:12:43.988337   52488 ssh_runner.go:195] Run: openssl version
	I0907 01:12:43.994300   52488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 01:12:44.007906   52488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 01:12:44.014265   52488 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 01:12:44.014324   52488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 01:12:44.020263   52488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 01:12:44.030276   52488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 01:12:44.040386   52488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 01:12:44.045345   52488 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 01:12:44.045409   52488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 01:12:44.051251   52488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 01:12:44.061433   52488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 01:12:44.071311   52488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 01:12:44.076106   52488 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 01:12:44.076163   52488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 01:12:44.081842   52488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
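	The openssl x509 -hash calls above compute the OpenSSL subject-name hash for each CA, and the ln -fs lines create the <hash>.0 symlinks (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL's default lookup in /etc/ssl/certs relies on. A minimal sketch of the same wiring for the minikube CA, assuming the same file locations as in the log:

	    # Recreate the hash-named symlink for the minikube CA (paths as in the log above).
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	    # Optional check: the apiserver cert should now verify against the CA path.
	    openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt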
	I0907 01:12:44.091987   52488 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 01:12:44.096834   52488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 01:12:44.104444   52488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 01:12:44.112011   52488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 01:12:44.119306   52488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 01:12:44.126144   52488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 01:12:44.132131   52488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
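	Each openssl x509 -checkend 86400 call above exits 0 only if the certificate stays valid for at least the next 86400 seconds (24 hours), which is presumably how minikube decides whether a cert needs regenerating before reuse. The same check, looped over the control-plane certs named in the log:

	    # Expiry check over the certs probed above (sketch; same paths as the log).
	    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	             etcd/healthcheck-client etcd/peer front-proxy-client; do
	      if openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400; then
	        echo "${c}: valid for more than 24h"
	      else
	        echo "${c}: expires within 24h (or unreadable)"
	      fi
	    done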
	I0907 01:12:44.137749   52488 kubeadm.go:404] StartCluster: {Name:newest-cni-294457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:newest-cni-294457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:tru
e] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 01:12:44.137827   52488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 01:12:44.137872   52488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 01:12:44.177384   52488 cri.go:89] found id: ""
	I0907 01:12:44.177460   52488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 01:12:44.188340   52488 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 01:12:44.188361   52488 kubeadm.go:636] restartCluster start
	I0907 01:12:44.188402   52488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 01:12:44.198233   52488 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:44.281760   52488 kubeconfig.go:135] verify returned: extract IP: "newest-cni-294457" does not appear in /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 01:12:44.282289   52488 kubeconfig.go:146] "newest-cni-294457" context is missing from /home/jenkins/minikube-integration/17174-6470/kubeconfig - will repair!
	I0907 01:12:44.283342   52488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
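	Here the profile's context was missing from the kubeconfig, so minikube repairs the file in place (hence the WriteFile lock above). For reference only, and separate from the repair this run performs itself, the state can be inspected and refreshed from the host with:

	    # Sketch: inspect the kubeconfig and ask minikube to refresh this profile's entry.
	    kubectl config get-contexts
	    minikube update-context -p newest-cni-294457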
	I0907 01:12:44.285935   52488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 01:12:44.296821   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:44.296874   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:44.309282   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:44.309303   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:44.309351   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:44.323136   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:44.823883   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:44.823971   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:44.841117   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:45.323634   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:45.323725   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:45.340148   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:45.823727   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:45.823791   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:45.837211   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:43.352173   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:43.352679   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:43.352701   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:43.352624   52854 retry.go:31] will retry after 1.36222385s: waiting for machine to come up
	I0907 01:12:44.716051   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:44.716647   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:44.716677   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:44.716614   52854 retry.go:31] will retry after 2.261524856s: waiting for machine to come up
	I0907 01:12:46.979503   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:46.980004   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:46.980030   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:46.979919   52854 retry.go:31] will retry after 2.751487872s: waiting for machine to come up
	I0907 01:12:46.324194   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:46.324282   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:46.337300   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:46.823972   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:46.824087   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:46.838373   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:47.323920   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:47.324015   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:47.340772   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:47.824146   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:47.824232   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:47.841427   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:48.323958   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:48.324041   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:48.339408   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:48.823982   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:48.824068   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:48.836795   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:49.323304   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:49.323399   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:49.337033   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:49.823522   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:49.823618   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:49.839836   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:50.323317   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:50.323393   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:50.336232   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 01:12:50.823521   52488 api_server.go:166] Checking apiserver status ...
	I0907 01:12:50.823585   52488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 01:12:50.836136   52488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
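	The block above is minikube polling for a kube-apiserver process: each retry runs the pgrep shown and gets exit status 1 because the apiserver has not started yet during this restart. A hand-rolled equivalent of the same wait, plus a health probe against the endpoint this profile uses (illustrative; not minikube's actual code path):

	    # Wait for the apiserver process, then probe /healthz (sketch; endpoint from this profile).
	    until pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); do sleep 0.5; done
	    echo "kube-apiserver pid: ${pid}"
	    curl -sk https://192.168.72.213:8443/healthz    # expect "ok" once it is serving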
	I0907 01:12:49.733057   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:49.733645   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:49.733692   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:49.733588   52854 retry.go:31] will retry after 2.470653545s: waiting for machine to come up
	I0907 01:12:52.205922   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:12:52.206365   52813 main.go:141] libmachine: (auto-965889) DBG | unable to find current IP address of domain auto-965889 in network mk-auto-965889
	I0907 01:12:52.206392   52813 main.go:141] libmachine: (auto-965889) DBG | I0907 01:12:52.206331   52854 retry.go:31] will retry after 3.01122642s: waiting for machine to come up
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-07 00:51:03 UTC, ends at Thu 2023-09-07 01:12:54 UTC. --
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.290054464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4eeecff8-b0ee-43a0-97b0-6279a055be65 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.290128835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4eeecff8-b0ee-43a0-97b0-6279a055be65 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.290447827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4eeecff8-b0ee-43a0-97b0-6279a055be65 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.338972756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9c0594cd-d9d9-4b72-ad38-2168fbd6a53f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.339059139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9c0594cd-d9d9-4b72-ad38-2168fbd6a53f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.339497438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9c0594cd-d9d9-4b72-ad38-2168fbd6a53f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.351985032Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=1d95154c-8fed-44cf-8422-caee110f5b16 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.352579475Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&PodSandboxMetadata{Name:busybox,Uid:76f7d42f-7e32-4112-ae4e-053d2addea0e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047905443914447,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:37.452192998Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-vrgm9,Uid:0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047905426958
726,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:37.452179547Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64eed4451fc0d34740deed321020ec57f4174965aff7ed8423f7fb03d465ae40,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-d7nxw,Uid:92e557f4-3c56-49f4-931c-0e64fa3cb1df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047902130376620,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-d7nxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e557f4-3c56-49f4-931c-0e64fa3cb1df,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:37.
452194610Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&PodSandboxMetadata{Name:kube-proxy-47255,Uid:6e6b85b5-8bdd-4d0d-8424-1e7276b263c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047897820875447,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8bdd-4d0d-8424-1e7276b263c0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:37.452191384Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a741bf5a-bd74-49af-9920-2ba0a36a5d01,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047897793384875,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2023-09-07T00:51:37.452196127Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-546209,Uid:b1979c4dd3710ffc9cb52ec45088be1c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047890990415039,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b1979c4dd3710ffc9cb52ec45088be1c,kubernetes.io/config.seen: 2023-09-07T00:51:30.440359413Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-546209,Uid:8f2889dcd6a70e7e8153b7a3aa9cdabc,Namespace:kube-sy
stem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047890983473271,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.242:2379,kubernetes.io/config.hash: 8f2889dcd6a70e7e8153b7a3aa9cdabc,kubernetes.io/config.seen: 2023-09-07T00:51:30.440350727Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-546209,Uid:fee6a8067067539cf9cecf4ba53dd6b4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047890978564665,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.242:8443,kubernetes.io/config.hash: fee6a8067067539cf9cecf4ba53dd6b4,kubernetes.io/config.seen: 2023-09-07T00:51:30.440356520Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-546209,Uid:6724d84bba3d4ea71b357127cdd9eef3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047890939133942,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6724d84bba3d4ea71b357127cd
d9eef3,kubernetes.io/config.seen: 2023-09-07T00:51:30.440358229Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=1d95154c-8fed-44cf-8422-caee110f5b16 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.353540178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5f895682-c532-4922-8c1f-bbf0ac187f90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.353707903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5f895682-c532-4922-8c1f-bbf0ac187f90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.353979689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5f895682-c532-4922-8c1f-bbf0ac187f90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.381993783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1e5f730e-dd71-481b-a0a4-85f453971776 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.382087586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1e5f730e-dd71-481b-a0a4-85f453971776 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.382287226Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1e5f730e-dd71-481b-a0a4-85f453971776 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.426229699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1ba369c2-e458-4676-890b-d9823e4150d5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.426350158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1ba369c2-e458-4676-890b-d9823e4150d5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.426616261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1ba369c2-e458-4676-890b-d9823e4150d5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.467471017Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e182ec0e-3b04-4f31-b201-13cdd4eef6e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.467560626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e182ec0e-3b04-4f31-b201-13cdd4eef6e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.467927399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047898706723459,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8b
dd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[string
]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{io
.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Annota
tions:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:map[
string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e182ec0e-3b04-4f31-b201-13cdd4eef6e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.487255332Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=28d733b8-e9ec-4f07-9c9d-f8bf1b9c4a7c name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.487619811Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&PodSandboxMetadata{Name:busybox,Uid:76f7d42f-7e32-4112-ae4e-053d2addea0e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047905443914447,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:37.452192998Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-vrgm9,Uid:0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047905426958
726,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:37.452179547Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64eed4451fc0d34740deed321020ec57f4174965aff7ed8423f7fb03d465ae40,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-d7nxw,Uid:92e557f4-3c56-49f4-931c-0e64fa3cb1df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047902130376620,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-d7nxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e557f4-3c56-49f4-931c-0e64fa3cb1df,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:37.
452194610Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&PodSandboxMetadata{Name:kube-proxy-47255,Uid:6e6b85b5-8bdd-4d0d-8424-1e7276b263c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047897820875447,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-8bdd-4d0d-8424-1e7276b263c0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:37.452191384Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a741bf5a-bd74-49af-9920-2ba0a36a5d01,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047897793384875,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2023-09-07T00:51:37.452196127Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-546209,Uid:b1979c4dd3710ffc9cb52ec45088be1c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047890990415039,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b1979c4dd3710ffc9cb52ec45088be1c,kubernetes.io/config.seen: 2023-09-07T00:51:30.440359413Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-546209,Uid:8f2889dcd6a70e7e8153b7a3aa9cdabc,Namespace:kube-sy
stem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047890983473271,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.242:2379,kubernetes.io/config.hash: 8f2889dcd6a70e7e8153b7a3aa9cdabc,kubernetes.io/config.seen: 2023-09-07T00:51:30.440350727Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-546209,Uid:fee6a8067067539cf9cecf4ba53dd6b4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047890978564665,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.242:8443,kubernetes.io/config.hash: fee6a8067067539cf9cecf4ba53dd6b4,kubernetes.io/config.seen: 2023-09-07T00:51:30.440356520Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-546209,Uid:6724d84bba3d4ea71b357127cdd9eef3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047890939133942,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6724d84bba3d4ea71b357127cd
d9eef3,kubernetes.io/config.seen: 2023-09-07T00:51:30.440358229Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=28d733b8-e9ec-4f07-9c9d-f8bf1b9c4a7c name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.489367467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d74bf833-a2f3-4324-95fd-1cb50303f2e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.489471532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d74bf833-a2f3-4324-95fd-1cb50303f2e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:12:54 embed-certs-546209 crio[728]: time="2023-09-07 01:12:54.489800362Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71,PodSandboxId:e41590ec7641e0013696282aa230cef22f5af28c746f817d7da9833c4b73474e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047929756917381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a741bf5a-bd74-49af-9920-2ba0a36a5d01,},Annotations:map[string]string{io.kubernetes.container.hash: af2fc136,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7e689ee42b1c4beb22ba6ccc53fb49003437561aa4d0b92d555ccadca9c4c1,PodSandboxId:7548386602c352f3e07fcd514bdaacb37e81df91468ed91ef1bee36287c18ab7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047909508528893,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76f7d42f-7e32-4112-ae4e-053d2addea0e,},Annotations:map[string]string{io.kubernetes.container.hash: b083096e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc,PodSandboxId:e59f871b4b994aa3d681572a5f9037377fba7f56ff159e62e73fdb835869d16a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047906148859712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrgm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9,},Annotations:map[string]string{io.kubernetes.container.hash: 924abe91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3,PodSandboxId:8f9b0f503434ddcd730473ad9eb990519c4a8789d87c5dbd7065405d8dfd6976,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047898602894349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47255,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e6b85b5-
8bdd-4d0d-8424-1e7276b263c0,},Annotations:map[string]string{io.kubernetes.container.hash: eab8781e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0,PodSandboxId:1a8caaf07d65b2fad0d5f207ca0e07afbf5382cc1135d5403a14fbd10ae67b3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047892252736453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f2889dcd6a70e7e8153b7a3aa9cdabc,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7eb91404,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213,PodSandboxId:1c914348c6421e24c82b2f16aed16df4f77e6f7ec08f73329ed5aaafda9bb1f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047892374812101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1979c4dd3710ffc9cb52ec45088be1c,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168,PodSandboxId:f6811bf1cfb84cffe1459c90dfcec634f4c52afb7b3e3e245ce817430bff263d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047891651971699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6724d84bba3d4ea71b357127cdd9eef3,},Anno
tations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c,PodSandboxId:ec88ecc5dab6cf73ad86bcd943803fd6f98b22b4dd78b58f437202c4c90ffc14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047891567980924,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-546209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee6a8067067539cf9cecf4ba53dd6b4,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e4b9229e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d74bf833-a2f3-4324-95fd-1cb50303f2e1 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	3e19fc62694d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   e41590ec7641e
	db7e689ee42b1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   7548386602c35
	855a29ec437be       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      21 minutes ago      Running             coredns                   1                   e59f871b4b994
	9094ebc4a03d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   e41590ec7641e
	6af4cd8e3e587       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      21 minutes ago      Running             kube-proxy                1                   8f9b0f503434d
	9177fe24226fe       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      21 minutes ago      Running             kube-scheduler            1                   1c914348c6421
	3fee1540272d1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   1a8caaf07d65b
	22bdcb2b7b02d       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      21 minutes ago      Running             kube-controller-manager   1                   f6811bf1cfb84
	3bfeea0ca797b       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      21 minutes ago      Running             kube-apiserver            1                   ec88ecc5dab6c
	
	* 
	* ==> coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60886 - 8531 "HINFO IN 2783813726071619599.7588099067166792090. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015187174s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-546209
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-546209
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=embed-certs-546209
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_07T00_43_39_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:43:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-546209
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Sep 2023 01:12:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 01:12:31 +0000   Thu, 07 Sep 2023 00:43:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 01:12:31 +0000   Thu, 07 Sep 2023 00:43:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 01:12:31 +0000   Thu, 07 Sep 2023 00:43:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 01:12:31 +0000   Thu, 07 Sep 2023 00:51:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.242
	  Hostname:    embed-certs-546209
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 63417a3b59c148f19ad0029f51d9917d
	  System UUID:                63417a3b-59c1-48f1-9ad0-029f51d9917d
	  Boot ID:                    fea4dfc2-0ceb-4ce4-9108-1c291a715af7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-vrgm9                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-546209                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-546209             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-546209    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-47255                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-546209             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-d7nxw               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 28m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-546209 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-546209 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-546209 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-546209 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-546209 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-546209 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node embed-certs-546209 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-546209 event: Registered Node embed-certs-546209 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-546209 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-546209 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-546209 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-546209 event: Registered Node embed-certs-546209 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep 7 00:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076652] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.416100] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep 7 00:51] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153289] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.562531] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.754231] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.103119] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.163550] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.124921] systemd-fstab-generator[689]: Ignoring "noauto" for root device
	[  +0.237307] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[ +17.334121] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[ +15.381252] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] <==
	* {"level":"info","ts":"2023-09-07T01:06:35.013411Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1070,"took":"1.136552ms","hash":1370647953}
	{"level":"info","ts":"2023-09-07T01:06:35.01354Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1370647953,"revision":1070,"compact-revision":827}
	{"level":"info","ts":"2023-09-07T01:10:29.455602Z","caller":"traceutil/trace.go:171","msg":"trace[804403156] transaction","detail":"{read_only:false; response_revision:1503; number_of_response:1; }","duration":"117.467669ms","start":"2023-09-07T01:10:29.338114Z","end":"2023-09-07T01:10:29.455581Z","steps":["trace[804403156] 'process raft request'  (duration: 117.289376ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-07T01:11:34.338053Z","caller":"traceutil/trace.go:171","msg":"trace[83126118] linearizableReadLoop","detail":"{readStateIndex:1830; appliedIndex:1829; }","duration":"178.449657ms","start":"2023-09-07T01:11:34.159577Z","end":"2023-09-07T01:11:34.338027Z","steps":["trace[83126118] 'read index received'  (duration: 159.465271ms)","trace[83126118] 'applied index is now lower than readState.Index'  (duration: 18.983154ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-07T01:11:34.338418Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.821936ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-07T01:11:34.338453Z","caller":"traceutil/trace.go:171","msg":"trace[553830883] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1554; }","duration":"178.942596ms","start":"2023-09-07T01:11:34.1595Z","end":"2023-09-07T01:11:34.338442Z","steps":["trace[553830883] 'agreement among raft nodes before linearized reading'  (duration: 178.792239ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-07T01:11:34.338215Z","caller":"traceutil/trace.go:171","msg":"trace[573219051] transaction","detail":"{read_only:false; response_revision:1554; number_of_response:1; }","duration":"435.254285ms","start":"2023-09-07T01:11:33.902934Z","end":"2023-09-07T01:11:34.338188Z","steps":["trace[573219051] 'process raft request'  (duration: 416.21212ms)","trace[573219051] 'compare'  (duration: 18.73345ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-07T01:11:34.338798Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-07T01:11:33.902919Z","time spent":"435.700549ms","remote":"127.0.0.1:59510","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-546209\" mod_revision:1546 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-546209\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-546209\" > >"}
	{"level":"warn","ts":"2023-09-07T01:11:34.338828Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.709198ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:611"}
	{"level":"info","ts":"2023-09-07T01:11:34.341145Z","caller":"traceutil/trace.go:171","msg":"trace[607019909] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1554; }","duration":"105.027698ms","start":"2023-09-07T01:11:34.236104Z","end":"2023-09-07T01:11:34.341132Z","steps":["trace[607019909] 'agreement among raft nodes before linearized reading'  (duration: 102.663131ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T01:11:34.646307Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.666471ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10438933187575660742 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1553 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:522 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-07T01:11:34.64642Z","caller":"traceutil/trace.go:171","msg":"trace[720510910] transaction","detail":"{read_only:false; response_revision:1555; number_of_response:1; }","duration":"300.99836ms","start":"2023-09-07T01:11:34.345405Z","end":"2023-09-07T01:11:34.646404Z","steps":["trace[720510910] 'process raft request'  (duration: 155.155061ms)","trace[720510910] 'compare'  (duration: 145.374196ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-07T01:11:34.646486Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-07T01:11:34.345389Z","time spent":"301.063107ms","remote":"127.0.0.1:59488","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":595,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1553 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:522 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-09-07T01:11:35.384491Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1313}
	{"level":"warn","ts":"2023-09-07T01:11:35.385437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.14443ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10438933187575660744 username:\"kube-apiserver-etcd-client\" auth_revision:1 > compaction:<revision:1313 > ","response":"size:5"}
	{"level":"info","ts":"2023-09-07T01:11:35.38554Z","caller":"traceutil/trace.go:171","msg":"trace[998422460] linearizableReadLoop","detail":"{readStateIndex:1833; appliedIndex:1832; }","duration":"226.937137ms","start":"2023-09-07T01:11:35.158589Z","end":"2023-09-07T01:11:35.385526Z","steps":["trace[998422460] 'read index received'  (duration: 74.121315ms)","trace[998422460] 'applied index is now lower than readState.Index'  (duration: 152.814236ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-07T01:11:35.385616Z","caller":"traceutil/trace.go:171","msg":"trace[582373524] compact","detail":"{revision:1313; response_revision:1556; }","duration":"281.528475ms","start":"2023-09-07T01:11:35.104081Z","end":"2023-09-07T01:11:35.38561Z","steps":["trace[582373524] 'process raft request'  (duration: 128.668395ms)","trace[582373524] 'check and update compact revision'  (duration: 150.983676ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-07T01:11:35.386024Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.446785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-07T01:11:35.386094Z","caller":"traceutil/trace.go:171","msg":"trace[895418795] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1556; }","duration":"227.524545ms","start":"2023-09-07T01:11:35.158558Z","end":"2023-09-07T01:11:35.386083Z","steps":["trace[895418795] 'agreement among raft nodes before linearized reading'  (duration: 227.413368ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T01:11:35.386305Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.030052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-09-07T01:11:35.38634Z","caller":"traceutil/trace.go:171","msg":"trace[901312346] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:1556; }","duration":"160.097344ms","start":"2023-09-07T01:11:35.226227Z","end":"2023-09-07T01:11:35.386324Z","steps":["trace[901312346] 'agreement among raft nodes before linearized reading'  (duration: 160.023818ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-07T01:11:35.388504Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1313,"took":"3.491159ms","hash":4081730599}
	{"level":"info","ts":"2023-09-07T01:11:35.388568Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4081730599,"revision":1313,"compact-revision":1070}
	{"level":"warn","ts":"2023-09-07T01:12:45.334938Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.604911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-07T01:12:45.335305Z","caller":"traceutil/trace.go:171","msg":"trace[785772305] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1613; }","duration":"175.982194ms","start":"2023-09-07T01:12:45.159285Z","end":"2023-09-07T01:12:45.335268Z","steps":["trace[785772305] 'range keys from in-memory index tree'  (duration: 175.368941ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:12:54 up 22 min,  0 users,  load average: 0.21, 0.20, 0.21
	Linux embed-certs-546209 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] <==
	* I0907 01:10:36.973493       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.132.56:443: connect: connection refused
	I0907 01:10:36.973554       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0907 01:11:36.973979       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.132.56:443: connect: connection refused
	I0907 01:11:36.974207       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:11:37.186363       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:11:37.186556       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:11:37.187085       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.132.56:443: connect: connection refused
	I0907 01:11:37.187142       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:11:38.187083       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:11:38.187254       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:11:38.187264       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:11:38.187388       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:11:38.187456       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:11:38.188693       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:12:36.973095       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.132.56:443: connect: connection refused
	I0907 01:12:36.973343       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:12:38.188339       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:12:38.188539       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:12:38.188578       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:12:38.189595       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:12:38.189747       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:12:38.189775       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] <==
	* E0907 01:07:50.224200       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:07:50.507700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="323.524µs"
	I0907 01:07:50.773289       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0907 01:08:05.502236       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="126.112µs"
	E0907 01:08:20.232324       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:08:20.783795       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:08:50.240878       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:08:50.793848       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:09:20.246732       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:09:20.802743       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:09:50.255921       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:09:50.810853       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:10:20.262403       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:10:20.820391       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:10:50.267871       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:10:50.831100       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:11:20.273538       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:11:20.839901       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:11:50.279332       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:11:50.848107       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:12:20.285580       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:12:20.857011       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:12:50.292384       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:12:50.867882       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0907 01:12:53.510447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="228.758µs"
	
	* 
	* ==> kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] <==
	* I0907 00:51:39.087480       1 server_others.go:69] "Using iptables proxy"
	I0907 00:51:39.185737       1 node.go:141] Successfully retrieved node IP: 192.168.50.242
	I0907 00:51:39.246818       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0907 00:51:39.246936       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0907 00:51:39.251491       1 server_others.go:152] "Using iptables Proxier"
	I0907 00:51:39.251613       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0907 00:51:39.252587       1 server.go:846] "Version info" version="v1.28.1"
	I0907 00:51:39.252747       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:51:39.253869       1 config.go:188] "Starting service config controller"
	I0907 00:51:39.254010       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0907 00:51:39.254047       1 config.go:97] "Starting endpoint slice config controller"
	I0907 00:51:39.254064       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0907 00:51:39.254568       1 config.go:315] "Starting node config controller"
	I0907 00:51:39.254606       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0907 00:51:39.354931       1 shared_informer.go:318] Caches are synced for node config
	I0907 00:51:39.355127       1 shared_informer.go:318] Caches are synced for service config
	I0907 00:51:39.355213       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] <==
	* I0907 00:51:34.688500       1 serving.go:348] Generated self-signed cert in-memory
	W0907 00:51:37.096877       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0907 00:51:37.097090       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0907 00:51:37.100221       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0907 00:51:37.100244       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0907 00:51:37.180276       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0907 00:51:37.180340       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:51:37.187145       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:51:37.187196       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0907 00:51:37.190428       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0907 00:51:37.190521       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0907 00:51:37.288160       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-07 00:51:03 UTC, ends at Thu 2023-09-07 01:12:55 UTC. --
	Sep 07 01:10:30 embed-certs-546209 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:10:31 embed-certs-546209 kubelet[935]: E0907 01:10:31.486602     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:10:43 embed-certs-546209 kubelet[935]: E0907 01:10:43.486892     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:10:58 embed-certs-546209 kubelet[935]: E0907 01:10:58.488618     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:11:09 embed-certs-546209 kubelet[935]: E0907 01:11:09.487404     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:11:21 embed-certs-546209 kubelet[935]: E0907 01:11:21.486346     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:11:30 embed-certs-546209 kubelet[935]: E0907 01:11:30.498406     935 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Sep 07 01:11:30 embed-certs-546209 kubelet[935]: E0907 01:11:30.524204     935 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:11:30 embed-certs-546209 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:11:30 embed-certs-546209 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:11:30 embed-certs-546209 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:11:33 embed-certs-546209 kubelet[935]: E0907 01:11:33.487223     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:11:48 embed-certs-546209 kubelet[935]: E0907 01:11:48.489966     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:12:00 embed-certs-546209 kubelet[935]: E0907 01:12:00.489088     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:12:15 embed-certs-546209 kubelet[935]: E0907 01:12:15.487597     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:12:29 embed-certs-546209 kubelet[935]: E0907 01:12:29.489195     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:12:30 embed-certs-546209 kubelet[935]: E0907 01:12:30.517493     935 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:12:30 embed-certs-546209 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:12:30 embed-certs-546209 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:12:30 embed-certs-546209 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:12:42 embed-certs-546209 kubelet[935]: E0907 01:12:42.513797     935 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 07 01:12:42 embed-certs-546209 kubelet[935]: E0907 01:12:42.513872     935 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 07 01:12:42 embed-certs-546209 kubelet[935]: E0907 01:12:42.514109     935 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-z55c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-d7nxw_kube-system(92e557f4-3c56-49f4-931c-0e64fa3cb1df): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 07 01:12:42 embed-certs-546209 kubelet[935]: E0907 01:12:42.514141     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	Sep 07 01:12:53 embed-certs-546209 kubelet[935]: E0907 01:12:53.487951     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d7nxw" podUID="92e557f4-3c56-49f4-931c-0e64fa3cb1df"
	
	* 
	* ==> storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] <==
	* I0907 00:52:09.890019       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0907 00:52:09.904609       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0907 00:52:09.904784       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0907 00:52:09.921291       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0907 00:52:09.921520       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-546209_9a1fbf15-7ae6-4bc3-9626-cf7a3ee36744!
	I0907 00:52:09.924209       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12604b25-c97b-477b-a25c-0fcb9eaf879f", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-546209_9a1fbf15-7ae6-4bc3-9626-cf7a3ee36744 became leader
	I0907 00:52:10.021966       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-546209_9a1fbf15-7ae6-4bc3-9626-cf7a3ee36744!
	
	* 
	* ==> storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] <==
	* I0907 00:51:39.141495       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0907 00:52:09.165406       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-546209 -n embed-certs-546209
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-546209 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-d7nxw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-546209 describe pod metrics-server-57f55c9bc5-d7nxw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-546209 describe pod metrics-server-57f55c9bc5-d7nxw: exit status 1 (78.413358ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-d7nxw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-546209 describe pod metrics-server-57f55c9bc5-d7nxw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (466.45s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-07 01:14:28.520368082 +0000 UTC m=+5807.241823631
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-773466 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-773466 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (100.79118ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-773466 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-773466 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-773466 logs -n 25: (1.464152483s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-321164                  | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-546209                 | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-773466       | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC | 07 Sep 23 00:56 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 01:10 UTC | 07 Sep 23 01:11 UTC |
	| start   | -p newest-cni-294457 --memory=2200 --alsologtostderr   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:11 UTC | 07 Sep 23 01:12 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-294457             | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-294457                                   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-294457                  | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-294457 --memory=2200 --alsologtostderr   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:12 UTC |
	| start   | -p auto-965889 --memory=3072                           | auto-965889                  | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:14 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:12 UTC |
	| start   | -p kindnet-965889                                      | kindnet-965889               | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:14 UTC |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-294457 sudo                              | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:13 UTC | 07 Sep 23 01:13 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-294457                                   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:13 UTC | 07 Sep 23 01:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-294457                                   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:13 UTC | 07 Sep 23 01:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-294457                                   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:13 UTC | 07 Sep 23 01:13 UTC |
	| delete  | -p newest-cni-294457                                   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:13 UTC | 07 Sep 23 01:13 UTC |
	| start   | -p calico-965889 --memory=3072                         | calico-965889                | jenkins | v1.31.2 | 07 Sep 23 01:13 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p auto-965889 pgrep -a                                | auto-965889                  | jenkins | v1.31.2 | 07 Sep 23 01:14 UTC | 07 Sep 23 01:14 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p kindnet-965889 pgrep -a                             | kindnet-965889               | jenkins | v1.31.2 | 07 Sep 23 01:14 UTC | 07 Sep 23 01:14 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/07 01:13:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 01:13:15.408643   53935 out.go:296] Setting OutFile to fd 1 ...
	I0907 01:13:15.408770   53935 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 01:13:15.408780   53935 out.go:309] Setting ErrFile to fd 2...
	I0907 01:13:15.408787   53935 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 01:13:15.409071   53935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 01:13:15.409779   53935 out.go:303] Setting JSON to false
	I0907 01:13:15.411072   53935 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6940,"bootTime":1694042256,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 01:13:15.411186   53935 start.go:138] virtualization: kvm guest
	I0907 01:13:15.413576   53935 out.go:177] * [calico-965889] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 01:13:15.415074   53935 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 01:13:15.415087   53935 notify.go:220] Checking for updates...
	I0907 01:13:15.416608   53935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 01:13:15.418167   53935 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 01:13:15.419583   53935 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 01:13:15.421056   53935 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 01:13:15.422339   53935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 01:13:15.424086   53935 config.go:182] Loaded profile config "auto-965889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 01:13:15.424223   53935 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 01:13:15.424352   53935 config.go:182] Loaded profile config "kindnet-965889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 01:13:15.424473   53935 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 01:13:15.467674   53935 out.go:177] * Using the kvm2 driver based on user configuration
	I0907 01:13:15.469396   53935 start.go:298] selected driver: kvm2
	I0907 01:13:15.469415   53935 start.go:902] validating driver "kvm2" against <nil>
	I0907 01:13:15.469429   53935 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 01:13:15.470167   53935 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 01:13:15.470247   53935 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 01:13:15.486249   53935 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 01:13:15.486311   53935 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0907 01:13:15.486620   53935 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 01:13:15.486659   53935 cni.go:84] Creating CNI manager for "calico"
	I0907 01:13:15.486670   53935 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0907 01:13:15.486678   53935 start_flags.go:321] config:
	{Name:calico-965889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-965889 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 01:13:15.486842   53935 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 01:13:15.489496   53935 out.go:177] * Starting control plane node calico-965889 in cluster calico-965889
	I0907 01:13:14.163686   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:14.172261   53267 main.go:141] libmachine: (kindnet-965889) DBG | unable to find current IP address of domain kindnet-965889 in network mk-kindnet-965889
	I0907 01:13:14.172292   53267 main.go:141] libmachine: (kindnet-965889) DBG | I0907 01:13:14.172167   53328 retry.go:31] will retry after 2.914315535s: waiting for machine to come up
	I0907 01:13:15.490738   53935 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 01:13:15.490793   53935 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0907 01:13:15.490806   53935 cache.go:57] Caching tarball of preloaded images
	I0907 01:13:15.490885   53935 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 01:13:15.490897   53935 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 01:13:15.491011   53935 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/config.json ...
	I0907 01:13:15.491024   53935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/config.json: {Name:mk3c60609d276a6826ad1b4aa0247f6512db0141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:13:15.491139   53935 start.go:365] acquiring machines lock for calico-965889: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 01:13:17.087876   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:17.088396   53267 main.go:141] libmachine: (kindnet-965889) DBG | unable to find current IP address of domain kindnet-965889 in network mk-kindnet-965889
	I0907 01:13:17.088425   53267 main.go:141] libmachine: (kindnet-965889) DBG | I0907 01:13:17.088347   53328 retry.go:31] will retry after 3.026355734s: waiting for machine to come up
	I0907 01:13:20.115937   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:20.116593   53267 main.go:141] libmachine: (kindnet-965889) DBG | unable to find current IP address of domain kindnet-965889 in network mk-kindnet-965889
	I0907 01:13:20.116628   53267 main.go:141] libmachine: (kindnet-965889) DBG | I0907 01:13:20.116541   53328 retry.go:31] will retry after 3.846429345s: waiting for machine to come up
	I0907 01:13:22.360855   52813 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0907 01:13:22.360964   52813 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 01:13:22.361061   52813 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 01:13:22.361177   52813 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 01:13:22.361298   52813 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 01:13:22.361386   52813 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 01:13:22.363182   52813 out.go:204]   - Generating certificates and keys ...
	I0907 01:13:22.363267   52813 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 01:13:22.363343   52813 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 01:13:22.363427   52813 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0907 01:13:22.363524   52813 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0907 01:13:22.363623   52813 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0907 01:13:22.363690   52813 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0907 01:13:22.363762   52813 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0907 01:13:22.363922   52813 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-965889 localhost] and IPs [192.168.61.35 127.0.0.1 ::1]
	I0907 01:13:22.364008   52813 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0907 01:13:22.364180   52813 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-965889 localhost] and IPs [192.168.61.35 127.0.0.1 ::1]
	I0907 01:13:22.364285   52813 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0907 01:13:22.364391   52813 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0907 01:13:22.364461   52813 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0907 01:13:22.364550   52813 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 01:13:22.364632   52813 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 01:13:22.364680   52813 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 01:13:22.364759   52813 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 01:13:22.364837   52813 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 01:13:22.364914   52813 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 01:13:22.365001   52813 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 01:13:22.366756   52813 out.go:204]   - Booting up control plane ...
	I0907 01:13:22.366861   52813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 01:13:22.366985   52813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 01:13:22.367064   52813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 01:13:22.367181   52813 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 01:13:22.367325   52813 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 01:13:22.367398   52813 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0907 01:13:22.367573   52813 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 01:13:22.367679   52813 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503036 seconds
	I0907 01:13:22.367819   52813 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 01:13:22.367998   52813 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 01:13:22.368081   52813 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 01:13:22.368308   52813 kubeadm.go:322] [mark-control-plane] Marking the node auto-965889 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0907 01:13:22.368384   52813 kubeadm.go:322] [bootstrap-token] Using token: d0p30r.g3p74lftgq3p8e4r
	I0907 01:13:22.370762   52813 out.go:204]   - Configuring RBAC rules ...
	I0907 01:13:22.370890   52813 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 01:13:22.370988   52813 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0907 01:13:22.371154   52813 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 01:13:22.371314   52813 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 01:13:22.371489   52813 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 01:13:22.371611   52813 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 01:13:22.371709   52813 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0907 01:13:22.371750   52813 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 01:13:22.371792   52813 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 01:13:22.371798   52813 kubeadm.go:322] 
	I0907 01:13:22.371849   52813 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 01:13:22.371854   52813 kubeadm.go:322] 
	I0907 01:13:22.371914   52813 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 01:13:22.371921   52813 kubeadm.go:322] 
	I0907 01:13:22.371941   52813 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 01:13:22.372008   52813 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 01:13:22.372057   52813 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 01:13:22.372071   52813 kubeadm.go:322] 
	I0907 01:13:22.372132   52813 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0907 01:13:22.372139   52813 kubeadm.go:322] 
	I0907 01:13:22.372207   52813 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0907 01:13:22.372216   52813 kubeadm.go:322] 
	I0907 01:13:22.372297   52813 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 01:13:22.372415   52813 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 01:13:22.372515   52813 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 01:13:22.372529   52813 kubeadm.go:322] 
	I0907 01:13:22.372615   52813 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0907 01:13:22.372708   52813 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 01:13:22.372719   52813 kubeadm.go:322] 
	I0907 01:13:22.372813   52813 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token d0p30r.g3p74lftgq3p8e4r \
	I0907 01:13:22.372935   52813 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 01:13:22.372973   52813 kubeadm.go:322] 	--control-plane 
	I0907 01:13:22.372982   52813 kubeadm.go:322] 
	I0907 01:13:22.373110   52813 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 01:13:22.373119   52813 kubeadm.go:322] 
	I0907 01:13:22.373231   52813 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token d0p30r.g3p74lftgq3p8e4r \
	I0907 01:13:22.373386   52813 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 01:13:22.373398   52813 cni.go:84] Creating CNI manager for ""
	I0907 01:13:22.373404   52813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 01:13:22.375113   52813 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 01:13:22.376390   52813 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 01:13:22.412585   52813 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
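The 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist in the line above is not reproduced in this log. For orientation only, a minimal bridge CNI conflist of the kind minikube generates for the "kvm2 + crio" combination typically looks like the sketch below; the exact contents, subnet, and plugin list used in this run are assumptions, not taken from the log.

	# Illustrative sketch only -- the real 457-byte file written by minikube is not shown in this log.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF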
	I0907 01:13:22.475115   52813 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 01:13:22.475195   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:22.475205   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=auto-965889 minikube.k8s.io/updated_at=2023_09_07T01_13_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:22.735551   52813 ops.go:34] apiserver oom_adj: -16
	I0907 01:13:22.735703   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:22.821567   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:23.965168   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:23.965776   53267 main.go:141] libmachine: (kindnet-965889) DBG | unable to find current IP address of domain kindnet-965889 in network mk-kindnet-965889
	I0907 01:13:23.965796   53267 main.go:141] libmachine: (kindnet-965889) DBG | I0907 01:13:23.965736   53328 retry.go:31] will retry after 6.398537715s: waiting for machine to come up
	I0907 01:13:23.428594   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:23.928016   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:24.428378   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:24.927795   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:25.428609   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:25.928376   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:26.428037   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:26.927830   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:27.428360   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:27.928486   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:30.368804   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:30.369268   53267 main.go:141] libmachine: (kindnet-965889) Found IP for machine: 192.168.50.23
	I0907 01:13:30.369299   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has current primary IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:30.369309   53267 main.go:141] libmachine: (kindnet-965889) Reserving static IP address...
	I0907 01:13:30.369718   53267 main.go:141] libmachine: (kindnet-965889) DBG | unable to find host DHCP lease matching {name: "kindnet-965889", mac: "52:54:00:ab:98:a9", ip: "192.168.50.23"} in network mk-kindnet-965889
	I0907 01:13:30.448586   53267 main.go:141] libmachine: (kindnet-965889) DBG | Getting to WaitForSSH function...
	I0907 01:13:30.448614   53267 main.go:141] libmachine: (kindnet-965889) Reserved static IP address: 192.168.50.23
	I0907 01:13:30.448668   53267 main.go:141] libmachine: (kindnet-965889) Waiting for SSH to be available...
	I0907 01:13:30.451500   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:30.452086   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:30.452118   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:30.452257   53267 main.go:141] libmachine: (kindnet-965889) DBG | Using SSH client type: external
	I0907 01:13:30.452284   53267 main.go:141] libmachine: (kindnet-965889) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/kindnet-965889/id_rsa (-rw-------)
	I0907 01:13:30.452320   53267 main.go:141] libmachine: (kindnet-965889) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/kindnet-965889/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 01:13:30.452337   53267 main.go:141] libmachine: (kindnet-965889) DBG | About to run SSH command:
	I0907 01:13:30.452348   53267 main.go:141] libmachine: (kindnet-965889) DBG | exit 0
	I0907 01:13:30.543258   53267 main.go:141] libmachine: (kindnet-965889) DBG | SSH cmd err, output: <nil>: 
	I0907 01:13:30.543544   53267 main.go:141] libmachine: (kindnet-965889) KVM machine creation complete!
	I0907 01:13:30.543871   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetConfigRaw
	I0907 01:13:30.544391   53267 main.go:141] libmachine: (kindnet-965889) Calling .DriverName
	I0907 01:13:30.544629   53267 main.go:141] libmachine: (kindnet-965889) Calling .DriverName
	I0907 01:13:30.544831   53267 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0907 01:13:30.544847   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetState
	I0907 01:13:30.546272   53267 main.go:141] libmachine: Detecting operating system of created instance...
	I0907 01:13:30.546290   53267 main.go:141] libmachine: Waiting for SSH to be available...
	I0907 01:13:30.546299   53267 main.go:141] libmachine: Getting to WaitForSSH function...
	I0907 01:13:30.546309   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:13:30.548735   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:30.549150   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:30.549184   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:30.549276   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHPort
	I0907 01:13:30.549429   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:30.549600   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:30.549753   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHUsername
	I0907 01:13:30.549894   53267 main.go:141] libmachine: Using SSH client type: native
	I0907 01:13:30.550531   53267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0907 01:13:30.550550   53267 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0907 01:13:30.662225   53267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 01:13:30.662255   53267 main.go:141] libmachine: Detecting the provisioner...
	I0907 01:13:30.662266   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:13:30.665002   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:30.665345   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:30.665379   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:30.665501   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHPort
	I0907 01:13:30.665667   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:30.665800   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:30.665939   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHUsername
	I0907 01:13:30.666089   53267 main.go:141] libmachine: Using SSH client type: native
	I0907 01:13:30.666512   53267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0907 01:13:30.666524   53267 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0907 01:13:30.780093   53267 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0907 01:13:30.780201   53267 main.go:141] libmachine: found compatible host: buildroot
	I0907 01:13:30.780216   53267 main.go:141] libmachine: Provisioning with buildroot...
	I0907 01:13:30.780230   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetMachineName
	I0907 01:13:30.780491   53267 buildroot.go:166] provisioning hostname "kindnet-965889"
	I0907 01:13:30.780512   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetMachineName
	I0907 01:13:30.780705   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:13:30.783203   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:30.783570   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:30.783601   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:30.783694   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHPort
	I0907 01:13:30.783913   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:30.784095   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:30.784258   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHUsername
	I0907 01:13:30.784454   53267 main.go:141] libmachine: Using SSH client type: native
	I0907 01:13:30.784878   53267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0907 01:13:30.784891   53267 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-965889 && echo "kindnet-965889" | sudo tee /etc/hostname
	I0907 01:13:30.914203   53267 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-965889
	
	I0907 01:13:30.914239   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:13:30.917200   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:30.917533   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:30.917569   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:30.917742   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHPort
	I0907 01:13:30.917933   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:30.918120   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:30.918270   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHUsername
	I0907 01:13:30.918457   53267 main.go:141] libmachine: Using SSH client type: native
	I0907 01:13:30.918970   53267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0907 01:13:30.919014   53267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-965889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-965889/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-965889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 01:13:31.040456   53267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 01:13:31.040488   53267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 01:13:31.040527   53267 buildroot.go:174] setting up certificates
	I0907 01:13:31.040536   53267 provision.go:83] configureAuth start
	I0907 01:13:31.040559   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetMachineName
	I0907 01:13:31.040858   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetIP
	I0907 01:13:31.043260   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.043604   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:31.043650   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.043787   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:13:31.046055   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.046583   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:31.046611   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.046808   53267 provision.go:138] copyHostCerts
	I0907 01:13:31.046873   53267 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 01:13:31.046896   53267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 01:13:31.046964   53267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 01:13:31.047121   53267 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 01:13:31.047134   53267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 01:13:31.047168   53267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 01:13:31.047256   53267 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 01:13:31.047263   53267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 01:13:31.047282   53267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 01:13:31.047340   53267 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.kindnet-965889 san=[192.168.50.23 192.168.50.23 localhost 127.0.0.1 minikube kindnet-965889]
	I0907 01:13:31.328920   53267 provision.go:172] copyRemoteCerts
	I0907 01:13:31.328970   53267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 01:13:31.328992   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:13:31.331701   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.332018   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:31.332063   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.332217   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHPort
	I0907 01:13:31.332445   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:31.332570   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHUsername
	I0907 01:13:31.332715   53267 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/kindnet-965889/id_rsa Username:docker}
	I0907 01:13:31.420290   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 01:13:31.447247   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0907 01:13:31.472189   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 01:13:31.497948   53267 provision.go:86] duration metric: configureAuth took 457.398579ms
	I0907 01:13:31.497977   53267 buildroot.go:189] setting minikube options for container-runtime
	I0907 01:13:31.498192   53267 config.go:182] Loaded profile config "kindnet-965889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 01:13:31.498268   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:13:31.500805   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.501237   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:31.501270   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.501431   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHPort
	I0907 01:13:31.501674   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:31.501887   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:31.502078   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHUsername
	I0907 01:13:31.502246   53267 main.go:141] libmachine: Using SSH client type: native
	I0907 01:13:31.502931   53267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0907 01:13:31.502962   53267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 01:13:32.079790   53935 start.go:369] acquired machines lock for "calico-965889" in 16.588580902s
	I0907 01:13:32.079848   53935 start.go:93] Provisioning new machine with config: &{Name:calico-965889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:calico-965889 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 01:13:32.079980   53935 start.go:125] createHost starting for "" (driver="kvm2")
	I0907 01:13:28.428298   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:28.927910   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:29.427725   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:29.928028   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:30.427794   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:30.927798   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:31.428319   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:31.928648   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:32.427963   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:32.928653   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:31.820682   53267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 01:13:31.820713   53267 main.go:141] libmachine: Checking connection to Docker...
	I0907 01:13:31.820726   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetURL
	I0907 01:13:31.822000   53267 main.go:141] libmachine: (kindnet-965889) DBG | Using libvirt version 6000000
	I0907 01:13:31.824318   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.824719   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:31.824751   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.824928   53267 main.go:141] libmachine: Docker is up and running!
	I0907 01:13:31.824948   53267 main.go:141] libmachine: Reticulating splines...
	I0907 01:13:31.824957   53267 client.go:171] LocalClient.Create took 30.822681588s
	I0907 01:13:31.824982   53267 start.go:167] duration metric: libmachine.API.Create for "kindnet-965889" took 30.822775763s
	I0907 01:13:31.824992   53267 start.go:300] post-start starting for "kindnet-965889" (driver="kvm2")
	I0907 01:13:31.825005   53267 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 01:13:31.825035   53267 main.go:141] libmachine: (kindnet-965889) Calling .DriverName
	I0907 01:13:31.825260   53267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 01:13:31.825282   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:13:31.828287   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.828663   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:31.828709   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.828860   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHPort
	I0907 01:13:31.829014   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:31.829191   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHUsername
	I0907 01:13:31.829310   53267 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/kindnet-965889/id_rsa Username:docker}
	I0907 01:13:31.916768   53267 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 01:13:31.921040   53267 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 01:13:31.921067   53267 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 01:13:31.921148   53267 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 01:13:31.921226   53267 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 01:13:31.921304   53267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 01:13:31.931497   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 01:13:31.955144   53267 start.go:303] post-start completed in 130.136384ms
	I0907 01:13:31.955198   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetConfigRaw
	I0907 01:13:31.955904   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetIP
	I0907 01:13:31.958976   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.959343   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:31.959380   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.959634   53267 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/config.json ...
	I0907 01:13:31.959853   53267 start.go:128] duration metric: createHost completed in 30.979118297s
	I0907 01:13:31.959885   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:13:31.962478   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.962847   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:31.962887   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:31.962989   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHPort
	I0907 01:13:31.963176   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:31.963388   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:31.963510   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHUsername
	I0907 01:13:31.963664   53267 main.go:141] libmachine: Using SSH client type: native
	I0907 01:13:31.964087   53267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0907 01:13:31.964104   53267 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 01:13:32.079600   53267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694049212.067763100
	
	I0907 01:13:32.079624   53267 fix.go:206] guest clock: 1694049212.067763100
	I0907 01:13:32.079634   53267 fix.go:219] Guest: 2023-09-07 01:13:32.0677631 +0000 UTC Remote: 2023-09-07 01:13:31.959869974 +0000 UTC m=+35.252776570 (delta=107.893126ms)
	I0907 01:13:32.079661   53267 fix.go:190] guest clock delta is within tolerance: 107.893126ms
	I0907 01:13:32.079667   53267 start.go:83] releasing machines lock for "kindnet-965889", held for 31.099123536s
	I0907 01:13:32.079700   53267 main.go:141] libmachine: (kindnet-965889) Calling .DriverName
	I0907 01:13:32.079977   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetIP
	I0907 01:13:32.083011   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:32.083494   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:32.083529   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:32.083718   53267 main.go:141] libmachine: (kindnet-965889) Calling .DriverName
	I0907 01:13:32.084418   53267 main.go:141] libmachine: (kindnet-965889) Calling .DriverName
	I0907 01:13:32.084622   53267 main.go:141] libmachine: (kindnet-965889) Calling .DriverName
	I0907 01:13:32.084707   53267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 01:13:32.084756   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:13:32.084866   53267 ssh_runner.go:195] Run: cat /version.json
	I0907 01:13:32.084894   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:13:32.087659   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:32.087959   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:32.087997   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:32.088023   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:32.088284   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHPort
	I0907 01:13:32.088406   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:32.088445   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:32.088486   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:32.088610   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHPort
	I0907 01:13:32.088713   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHUsername
	I0907 01:13:32.088807   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:13:32.088901   53267 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/kindnet-965889/id_rsa Username:docker}
	I0907 01:13:32.088932   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHUsername
	I0907 01:13:32.089056   53267 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/kindnet-965889/id_rsa Username:docker}
	I0907 01:13:32.175880   53267 ssh_runner.go:195] Run: systemctl --version
	I0907 01:13:32.201521   53267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 01:13:32.700662   53267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 01:13:32.708946   53267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 01:13:32.709026   53267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 01:13:32.725986   53267 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 01:13:32.726005   53267 start.go:466] detecting cgroup driver to use...
	I0907 01:13:32.726061   53267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 01:13:32.743547   53267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 01:13:32.757303   53267 docker.go:196] disabling cri-docker service (if available) ...
	I0907 01:13:32.757366   53267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 01:13:32.770933   53267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 01:13:32.784542   53267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 01:13:32.903003   53267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 01:13:33.030323   53267 docker.go:212] disabling docker service ...
	I0907 01:13:33.030392   53267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 01:13:33.047167   53267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 01:13:33.060346   53267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 01:13:33.179688   53267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 01:13:33.296636   53267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 01:13:33.311092   53267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 01:13:33.328507   53267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 01:13:33.328571   53267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:13:33.338467   53267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 01:13:33.338537   53267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:13:33.348408   53267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:13:33.358373   53267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:13:33.368155   53267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 01:13:33.378552   53267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 01:13:33.388228   53267 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 01:13:33.388281   53267 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 01:13:33.401671   53267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 01:13:33.411520   53267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 01:13:33.571726   53267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 01:13:33.758813   53267 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 01:13:33.758894   53267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 01:13:33.764242   53267 start.go:534] Will wait 60s for crictl version
	I0907 01:13:33.764290   53267 ssh_runner.go:195] Run: which crictl
	I0907 01:13:33.769017   53267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 01:13:33.805348   53267 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 01:13:33.805434   53267 ssh_runner.go:195] Run: crio --version
	I0907 01:13:33.861119   53267 ssh_runner.go:195] Run: crio --version
	I0907 01:13:33.936971   53267 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 01:13:33.427751   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:33.927759   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:34.428726   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:34.927863   52813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:35.148997   52813 kubeadm.go:1081] duration metric: took 12.673880857s to wait for elevateKubeSystemPrivileges.
	I0907 01:13:35.149037   52813 kubeadm.go:406] StartCluster complete in 26.006689557s
	I0907 01:13:35.149064   52813 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:13:35.149157   52813 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 01:13:35.150457   52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:13:35.152824   52813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 01:13:35.153108   52813 config.go:182] Loaded profile config "auto-965889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 01:13:35.153192   52813 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 01:13:35.153360   52813 addons.go:69] Setting storage-provisioner=true in profile "auto-965889"
	I0907 01:13:35.153387   52813 addons.go:231] Setting addon storage-provisioner=true in "auto-965889"
	I0907 01:13:35.153405   52813 addons.go:69] Setting default-storageclass=true in profile "auto-965889"
	I0907 01:13:35.153437   52813 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-965889"
	I0907 01:13:35.153458   52813 host.go:66] Checking if "auto-965889" exists ...
	I0907 01:13:35.153969   52813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:13:35.154036   52813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:13:35.153989   52813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:13:35.154199   52813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:13:35.177701   52813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I0907 01:13:35.177884   52813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0907 01:13:35.178560   52813 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:13:35.178751   52813 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:13:35.179439   52813 main.go:141] libmachine: Using API Version  1
	I0907 01:13:35.179457   52813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:13:35.179557   52813 main.go:141] libmachine: Using API Version  1
	I0907 01:13:35.179582   52813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:13:35.180000   52813 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:13:35.180083   52813 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:13:35.180279   52813 main.go:141] libmachine: (auto-965889) Calling .GetState
	I0907 01:13:35.180783   52813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:13:35.180843   52813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:13:35.195616   52813 addons.go:231] Setting addon default-storageclass=true in "auto-965889"
	I0907 01:13:35.195680   52813 host.go:66] Checking if "auto-965889" exists ...
	I0907 01:13:35.196172   52813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:13:35.196200   52813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:13:35.201486   52813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36771
	I0907 01:13:35.202041   52813 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:13:35.202607   52813 main.go:141] libmachine: Using API Version  1
	I0907 01:13:35.202625   52813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:13:35.203065   52813 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:13:35.206301   52813 main.go:141] libmachine: (auto-965889) Calling .GetState
	I0907 01:13:35.208643   52813 main.go:141] libmachine: (auto-965889) Calling .DriverName
	I0907 01:13:35.216984   52813 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 01:13:35.217155   52813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I0907 01:13:35.219341   52813 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 01:13:35.219357   52813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 01:13:35.219381   52813 main.go:141] libmachine: (auto-965889) Calling .GetSSHHostname
	I0907 01:13:35.220798   52813 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:13:35.221584   52813 main.go:141] libmachine: Using API Version  1
	I0907 01:13:35.221636   52813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:13:35.222850   52813 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:13:35.222995   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:13:35.223349   52813 main.go:141] libmachine: (auto-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:ac:98", ip: ""} in network mk-auto-965889: {Iface:virbr1 ExpiryTime:2023-09-07 02:12:53 +0000 UTC Type:0 Mac:52:54:00:61:ac:98 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:auto-965889 Clientid:01:52:54:00:61:ac:98}
	I0907 01:13:35.223380   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined IP address 192.168.61.35 and MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:13:35.223972   52813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:13:35.224032   52813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:13:35.224397   52813 main.go:141] libmachine: (auto-965889) Calling .GetSSHPort
	I0907 01:13:35.224572   52813 main.go:141] libmachine: (auto-965889) Calling .GetSSHKeyPath
	I0907 01:13:35.225122   52813 main.go:141] libmachine: (auto-965889) Calling .GetSSHUsername
	I0907 01:13:35.225308   52813 sshutil.go:53] new ssh client: &{IP:192.168.61.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/auto-965889/id_rsa Username:docker}
	I0907 01:13:35.241598   52813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0907 01:13:35.242002   52813 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:13:35.242619   52813 main.go:141] libmachine: Using API Version  1
	I0907 01:13:35.242640   52813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:13:35.243620   52813 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:13:35.243864   52813 main.go:141] libmachine: (auto-965889) Calling .GetState
	I0907 01:13:35.246048   52813 main.go:141] libmachine: (auto-965889) Calling .DriverName
	I0907 01:13:35.246327   52813 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 01:13:35.246341   52813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 01:13:35.246359   52813 main.go:141] libmachine: (auto-965889) Calling .GetSSHHostname
	I0907 01:13:35.249628   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:13:35.250018   52813 main.go:141] libmachine: (auto-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:ac:98", ip: ""} in network mk-auto-965889: {Iface:virbr1 ExpiryTime:2023-09-07 02:12:53 +0000 UTC Type:0 Mac:52:54:00:61:ac:98 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:auto-965889 Clientid:01:52:54:00:61:ac:98}
	I0907 01:13:35.250050   52813 main.go:141] libmachine: (auto-965889) DBG | domain auto-965889 has defined IP address 192.168.61.35 and MAC address 52:54:00:61:ac:98 in network mk-auto-965889
	I0907 01:13:35.250226   52813 main.go:141] libmachine: (auto-965889) Calling .GetSSHPort
	I0907 01:13:35.250401   52813 main.go:141] libmachine: (auto-965889) Calling .GetSSHKeyPath
	I0907 01:13:35.250548   52813 main.go:141] libmachine: (auto-965889) Calling .GetSSHUsername
	I0907 01:13:35.250671   52813 sshutil.go:53] new ssh client: &{IP:192.168.61.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/auto-965889/id_rsa Username:docker}
	I0907 01:13:35.345608   52813 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-965889" context rescaled to 1 replicas
	I0907 01:13:35.345686   52813 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.61.35 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 01:13:35.347573   52813 out.go:177] * Verifying Kubernetes components...
	I0907 01:13:32.082020   53935 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0907 01:13:32.082229   53935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:13:32.082290   53935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:13:32.099033   53935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0907 01:13:32.099510   53935 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:13:32.100291   53935 main.go:141] libmachine: Using API Version  1
	I0907 01:13:32.100323   53935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:13:32.100670   53935 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:13:32.100857   53935 main.go:141] libmachine: (calico-965889) Calling .GetMachineName
	I0907 01:13:32.101007   53935 main.go:141] libmachine: (calico-965889) Calling .DriverName
	I0907 01:13:32.101163   53935 start.go:159] libmachine.API.Create for "calico-965889" (driver="kvm2")
	I0907 01:13:32.101188   53935 client.go:168] LocalClient.Create starting
	I0907 01:13:32.101223   53935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem
	I0907 01:13:32.101271   53935 main.go:141] libmachine: Decoding PEM data...
	I0907 01:13:32.101297   53935 main.go:141] libmachine: Parsing certificate...
	I0907 01:13:32.101380   53935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem
	I0907 01:13:32.101412   53935 main.go:141] libmachine: Decoding PEM data...
	I0907 01:13:32.101430   53935 main.go:141] libmachine: Parsing certificate...
	I0907 01:13:32.101457   53935 main.go:141] libmachine: Running pre-create checks...
	I0907 01:13:32.101472   53935 main.go:141] libmachine: (calico-965889) Calling .PreCreateCheck
	I0907 01:13:32.101807   53935 main.go:141] libmachine: (calico-965889) Calling .GetConfigRaw
	I0907 01:13:32.102229   53935 main.go:141] libmachine: Creating machine...
	I0907 01:13:32.102244   53935 main.go:141] libmachine: (calico-965889) Calling .Create
	I0907 01:13:32.102364   53935 main.go:141] libmachine: (calico-965889) Creating KVM machine...
	I0907 01:13:32.103634   53935 main.go:141] libmachine: (calico-965889) DBG | found existing default KVM network
	I0907 01:13:32.104831   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:32.104688   54041 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1c:f4:64} reservation:<nil>}
	I0907 01:13:32.106110   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:32.105994   54041 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:35:6a:92} reservation:<nil>}
	I0907 01:13:32.107133   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:32.107067   54041 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:57:3a:38} reservation:<nil>}
	I0907 01:13:32.108382   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:32.108293   54041 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003a7360}
	I0907 01:13:32.113914   53935 main.go:141] libmachine: (calico-965889) DBG | trying to create private KVM network mk-calico-965889 192.168.72.0/24...
	I0907 01:13:32.192773   53935 main.go:141] libmachine: (calico-965889) DBG | private KVM network mk-calico-965889 192.168.72.0/24 created
	I0907 01:13:32.192828   53935 main.go:141] libmachine: (calico-965889) Setting up store path in /home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889 ...
	I0907 01:13:32.192865   53935 main.go:141] libmachine: (calico-965889) Building disk image from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0907 01:13:32.192884   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:32.192749   54041 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 01:13:32.192902   53935 main.go:141] libmachine: (calico-965889) Downloading /home/jenkins/minikube-integration/17174-6470/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0907 01:13:32.463911   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:32.463785   54041 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889/id_rsa...
	I0907 01:13:32.603679   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:32.603549   54041 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889/calico-965889.rawdisk...
	I0907 01:13:32.603723   53935 main.go:141] libmachine: (calico-965889) DBG | Writing magic tar header
	I0907 01:13:32.603738   53935 main.go:141] libmachine: (calico-965889) DBG | Writing SSH key tar header
	I0907 01:13:32.614480   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:32.614344   54041 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889 ...
	I0907 01:13:32.614519   53935 main.go:141] libmachine: (calico-965889) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889 (perms=drwx------)
	I0907 01:13:32.614534   53935 main.go:141] libmachine: (calico-965889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889
	I0907 01:13:32.614552   53935 main.go:141] libmachine: (calico-965889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube/machines
	I0907 01:13:32.614563   53935 main.go:141] libmachine: (calico-965889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 01:13:32.614575   53935 main.go:141] libmachine: (calico-965889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6470
	I0907 01:13:32.614585   53935 main.go:141] libmachine: (calico-965889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0907 01:13:32.614601   53935 main.go:141] libmachine: (calico-965889) DBG | Checking permissions on dir: /home/jenkins
	I0907 01:13:32.614616   53935 main.go:141] libmachine: (calico-965889) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube/machines (perms=drwxr-xr-x)
	I0907 01:13:32.614625   53935 main.go:141] libmachine: (calico-965889) DBG | Checking permissions on dir: /home
	I0907 01:13:32.614633   53935 main.go:141] libmachine: (calico-965889) DBG | Skipping /home - not owner
	I0907 01:13:32.614648   53935 main.go:141] libmachine: (calico-965889) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470/.minikube (perms=drwxr-xr-x)
	I0907 01:13:32.614660   53935 main.go:141] libmachine: (calico-965889) Setting executable bit set on /home/jenkins/minikube-integration/17174-6470 (perms=drwxrwxr-x)
	I0907 01:13:32.614672   53935 main.go:141] libmachine: (calico-965889) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0907 01:13:32.614702   53935 main.go:141] libmachine: (calico-965889) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0907 01:13:32.614719   53935 main.go:141] libmachine: (calico-965889) Creating domain...
	I0907 01:13:32.616045   53935 main.go:141] libmachine: (calico-965889) define libvirt domain using xml: 
	I0907 01:13:32.616084   53935 main.go:141] libmachine: (calico-965889) <domain type='kvm'>
	I0907 01:13:32.616096   53935 main.go:141] libmachine: (calico-965889)   <name>calico-965889</name>
	I0907 01:13:32.616104   53935 main.go:141] libmachine: (calico-965889)   <memory unit='MiB'>3072</memory>
	I0907 01:13:32.616113   53935 main.go:141] libmachine: (calico-965889)   <vcpu>2</vcpu>
	I0907 01:13:32.616121   53935 main.go:141] libmachine: (calico-965889)   <features>
	I0907 01:13:32.616131   53935 main.go:141] libmachine: (calico-965889)     <acpi/>
	I0907 01:13:32.616150   53935 main.go:141] libmachine: (calico-965889)     <apic/>
	I0907 01:13:32.616160   53935 main.go:141] libmachine: (calico-965889)     <pae/>
	I0907 01:13:32.616176   53935 main.go:141] libmachine: (calico-965889)     
	I0907 01:13:32.616189   53935 main.go:141] libmachine: (calico-965889)   </features>
	I0907 01:13:32.616196   53935 main.go:141] libmachine: (calico-965889)   <cpu mode='host-passthrough'>
	I0907 01:13:32.616206   53935 main.go:141] libmachine: (calico-965889)   
	I0907 01:13:32.616214   53935 main.go:141] libmachine: (calico-965889)   </cpu>
	I0907 01:13:32.616221   53935 main.go:141] libmachine: (calico-965889)   <os>
	I0907 01:13:32.616229   53935 main.go:141] libmachine: (calico-965889)     <type>hvm</type>
	I0907 01:13:32.616250   53935 main.go:141] libmachine: (calico-965889)     <boot dev='cdrom'/>
	I0907 01:13:32.616264   53935 main.go:141] libmachine: (calico-965889)     <boot dev='hd'/>
	I0907 01:13:32.616278   53935 main.go:141] libmachine: (calico-965889)     <bootmenu enable='no'/>
	I0907 01:13:32.616286   53935 main.go:141] libmachine: (calico-965889)   </os>
	I0907 01:13:32.616292   53935 main.go:141] libmachine: (calico-965889)   <devices>
	I0907 01:13:32.616300   53935 main.go:141] libmachine: (calico-965889)     <disk type='file' device='cdrom'>
	I0907 01:13:32.616311   53935 main.go:141] libmachine: (calico-965889)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889/boot2docker.iso'/>
	I0907 01:13:32.616326   53935 main.go:141] libmachine: (calico-965889)       <target dev='hdc' bus='scsi'/>
	I0907 01:13:32.616340   53935 main.go:141] libmachine: (calico-965889)       <readonly/>
	I0907 01:13:32.616348   53935 main.go:141] libmachine: (calico-965889)     </disk>
	I0907 01:13:32.616364   53935 main.go:141] libmachine: (calico-965889)     <disk type='file' device='disk'>
	I0907 01:13:32.616381   53935 main.go:141] libmachine: (calico-965889)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0907 01:13:32.616400   53935 main.go:141] libmachine: (calico-965889)       <source file='/home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889/calico-965889.rawdisk'/>
	I0907 01:13:32.616413   53935 main.go:141] libmachine: (calico-965889)       <target dev='hda' bus='virtio'/>
	I0907 01:13:32.616426   53935 main.go:141] libmachine: (calico-965889)     </disk>
	I0907 01:13:32.616436   53935 main.go:141] libmachine: (calico-965889)     <interface type='network'>
	I0907 01:13:32.616452   53935 main.go:141] libmachine: (calico-965889)       <source network='mk-calico-965889'/>
	I0907 01:13:32.616465   53935 main.go:141] libmachine: (calico-965889)       <model type='virtio'/>
	I0907 01:13:32.616485   53935 main.go:141] libmachine: (calico-965889)     </interface>
	I0907 01:13:32.616495   53935 main.go:141] libmachine: (calico-965889)     <interface type='network'>
	I0907 01:13:32.616502   53935 main.go:141] libmachine: (calico-965889)       <source network='default'/>
	I0907 01:13:32.616511   53935 main.go:141] libmachine: (calico-965889)       <model type='virtio'/>
	I0907 01:13:32.616523   53935 main.go:141] libmachine: (calico-965889)     </interface>
	I0907 01:13:32.616537   53935 main.go:141] libmachine: (calico-965889)     <serial type='pty'>
	I0907 01:13:32.616547   53935 main.go:141] libmachine: (calico-965889)       <target port='0'/>
	I0907 01:13:32.616555   53935 main.go:141] libmachine: (calico-965889)     </serial>
	I0907 01:13:32.616565   53935 main.go:141] libmachine: (calico-965889)     <console type='pty'>
	I0907 01:13:32.616574   53935 main.go:141] libmachine: (calico-965889)       <target type='serial' port='0'/>
	I0907 01:13:32.616584   53935 main.go:141] libmachine: (calico-965889)     </console>
	I0907 01:13:32.616592   53935 main.go:141] libmachine: (calico-965889)     <rng model='virtio'>
	I0907 01:13:32.616599   53935 main.go:141] libmachine: (calico-965889)       <backend model='random'>/dev/random</backend>
	I0907 01:13:32.616605   53935 main.go:141] libmachine: (calico-965889)     </rng>
	I0907 01:13:32.616613   53935 main.go:141] libmachine: (calico-965889)     
	I0907 01:13:32.616621   53935 main.go:141] libmachine: (calico-965889)     
	I0907 01:13:32.616631   53935 main.go:141] libmachine: (calico-965889)   </devices>
	I0907 01:13:32.616639   53935 main.go:141] libmachine: (calico-965889) </domain>
	I0907 01:13:32.616653   53935 main.go:141] libmachine: (calico-965889) 
	I0907 01:13:32.689807   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:ab:8d:f1 in network default
	I0907 01:13:32.690449   53935 main.go:141] libmachine: (calico-965889) Ensuring networks are active...
	I0907 01:13:32.690477   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:32.691277   53935 main.go:141] libmachine: (calico-965889) Ensuring network default is active
	I0907 01:13:32.691624   53935 main.go:141] libmachine: (calico-965889) Ensuring network mk-calico-965889 is active
	I0907 01:13:32.692204   53935 main.go:141] libmachine: (calico-965889) Getting domain xml...
	I0907 01:13:32.693109   53935 main.go:141] libmachine: (calico-965889) Creating domain...
	I0907 01:13:34.287449   53935 main.go:141] libmachine: (calico-965889) Waiting to get IP...
	I0907 01:13:34.289390   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:34.290082   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:34.290166   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:34.290085   54041 retry.go:31] will retry after 245.549405ms: waiting for machine to come up
	I0907 01:13:34.537824   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:34.538542   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:34.538566   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:34.538424   54041 retry.go:31] will retry after 376.007986ms: waiting for machine to come up
	I0907 01:13:34.915847   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:34.916343   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:34.916367   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:34.916276   54041 retry.go:31] will retry after 436.996597ms: waiting for machine to come up
	I0907 01:13:35.355271   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:35.355670   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:35.355699   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:35.355500   54041 retry.go:31] will retry after 569.908636ms: waiting for machine to come up
	I0907 01:13:33.938522   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetIP
	I0907 01:13:33.942078   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:33.942726   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:13:33.942767   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:13:33.943175   53267 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0907 01:13:33.947702   53267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 01:13:33.962933   53267 localpath.go:92] copying /home/jenkins/minikube-integration/17174-6470/.minikube/client.crt -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/client.crt
	I0907 01:13:33.963093   53267 localpath.go:117] copying /home/jenkins/minikube-integration/17174-6470/.minikube/client.key -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/client.key
	I0907 01:13:33.963215   53267 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 01:13:33.963287   53267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 01:13:34.002478   53267 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 01:13:34.002595   53267 ssh_runner.go:195] Run: which lz4
	I0907 01:13:34.008802   53267 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0907 01:13:34.013392   53267 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 01:13:34.013431   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0907 01:13:35.986366   53267 crio.go:444] Took 1.977608 seconds to copy over tarball
	I0907 01:13:35.986430   53267 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 01:13:35.349279   52813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 01:13:35.666363   52813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0907 01:13:35.667627   52813 node_ready.go:35] waiting up to 15m0s for node "auto-965889" to be "Ready" ...
	I0907 01:13:35.682517   52813 node_ready.go:49] node "auto-965889" has status "Ready":"True"
	I0907 01:13:35.682542   52813 node_ready.go:38] duration metric: took 14.894822ms waiting for node "auto-965889" to be "Ready" ...
	I0907 01:13:35.682555   52813 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 01:13:35.716811   52813 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace to be "Ready" ...
	I0907 01:13:35.775205   52813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 01:13:35.781316   52813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 01:13:37.474107   52813 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.80769683s)
	I0907 01:13:37.474127   52813 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.698889948s)
	I0907 01:13:37.474142   52813 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0907 01:13:37.474168   52813 main.go:141] libmachine: Making call to close driver server
	I0907 01:13:37.474183   52813 main.go:141] libmachine: (auto-965889) Calling .Close
	I0907 01:13:37.476073   52813 main.go:141] libmachine: (auto-965889) DBG | Closing plugin on server side
	I0907 01:13:37.476093   52813 main.go:141] libmachine: Successfully made call to close driver server
	I0907 01:13:37.476108   52813 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 01:13:37.476128   52813 main.go:141] libmachine: Making call to close driver server
	I0907 01:13:37.476137   52813 main.go:141] libmachine: (auto-965889) Calling .Close
	I0907 01:13:37.476501   52813 main.go:141] libmachine: (auto-965889) DBG | Closing plugin on server side
	I0907 01:13:37.476549   52813 main.go:141] libmachine: Successfully made call to close driver server
	I0907 01:13:37.476560   52813 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 01:13:37.476578   52813 main.go:141] libmachine: Making call to close driver server
	I0907 01:13:37.476587   52813 main.go:141] libmachine: (auto-965889) Calling .Close
	I0907 01:13:37.476822   52813 main.go:141] libmachine: (auto-965889) DBG | Closing plugin on server side
	I0907 01:13:37.476857   52813 main.go:141] libmachine: Successfully made call to close driver server
	I0907 01:13:37.476867   52813 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 01:13:37.756299   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:13:37.895127   52813 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.113774825s)
	I0907 01:13:37.895174   52813 main.go:141] libmachine: Making call to close driver server
	I0907 01:13:37.895189   52813 main.go:141] libmachine: (auto-965889) Calling .Close
	I0907 01:13:37.897227   52813 main.go:141] libmachine: Successfully made call to close driver server
	I0907 01:13:37.897250   52813 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 01:13:37.897262   52813 main.go:141] libmachine: Making call to close driver server
	I0907 01:13:37.897272   52813 main.go:141] libmachine: (auto-965889) Calling .Close
	I0907 01:13:37.897229   52813 main.go:141] libmachine: (auto-965889) DBG | Closing plugin on server side
	I0907 01:13:37.897630   52813 main.go:141] libmachine: Successfully made call to close driver server
	I0907 01:13:37.897650   52813 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 01:13:37.900780   52813 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0907 01:13:37.902311   52813 addons.go:502] enable addons completed in 2.749115074s: enabled=[default-storageclass storage-provisioner]
	I0907 01:13:35.927360   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:35.928190   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:35.928214   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:35.928108   54041 retry.go:31] will retry after 762.504891ms: waiting for machine to come up
	I0907 01:13:36.692181   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:36.692934   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:36.692957   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:36.692885   54041 retry.go:31] will retry after 764.999705ms: waiting for machine to come up
	I0907 01:13:37.459199   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:37.459880   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:37.459909   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:37.459832   54041 retry.go:31] will retry after 996.035802ms: waiting for machine to come up
	I0907 01:13:38.457977   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:38.458684   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:38.458709   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:38.458606   54041 retry.go:31] will retry after 1.448715678s: waiting for machine to come up
	I0907 01:13:39.909094   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:39.909645   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:39.909677   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:39.909594   54041 retry.go:31] will retry after 1.669286252s: waiting for machine to come up
	I0907 01:13:39.640935   53267 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.654484386s)
	I0907 01:13:39.640962   53267 crio.go:451] Took 3.654568 seconds to extract the tarball
	I0907 01:13:39.640972   53267 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 01:13:39.688443   53267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 01:13:39.761126   53267 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 01:13:39.761149   53267 cache_images.go:84] Images are preloaded, skipping loading
	I0907 01:13:39.761224   53267 ssh_runner.go:195] Run: crio config
	I0907 01:13:39.821272   53267 cni.go:84] Creating CNI manager for "kindnet"
	I0907 01:13:39.821313   53267 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 01:13:39.821338   53267 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.23 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-965889 NodeName:kindnet-965889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 01:13:39.821532   53267 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-965889"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 01:13:39.821621   53267 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kindnet-965889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:kindnet-965889 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0907 01:13:39.821685   53267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 01:13:39.842708   53267 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 01:13:39.842789   53267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 01:13:39.853785   53267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0907 01:13:39.871839   53267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 01:13:39.889748   53267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0907 01:13:39.908734   53267 ssh_runner.go:195] Run: grep 192.168.50.23	control-plane.minikube.internal$ /etc/hosts
	I0907 01:13:39.913854   53267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 01:13:39.928467   53267 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889 for IP: 192.168.50.23
	I0907 01:13:39.928510   53267 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:13:39.928691   53267 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 01:13:39.928770   53267 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 01:13:39.928869   53267 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/client.key
	I0907 01:13:39.928891   53267 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/apiserver.key.78dfbc3d
	I0907 01:13:39.928904   53267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/apiserver.crt.78dfbc3d with IP's: [192.168.50.23 10.96.0.1 127.0.0.1 10.0.0.1]
	I0907 01:13:40.005056   53267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/apiserver.crt.78dfbc3d ...
	I0907 01:13:40.005084   53267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/apiserver.crt.78dfbc3d: {Name:mk5a08d8e211b19ceeb364bd4a86e86749617e7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:13:40.005272   53267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/apiserver.key.78dfbc3d ...
	I0907 01:13:40.005286   53267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/apiserver.key.78dfbc3d: {Name:mkd2f372621d0bb3568d4bbfd21a39fef7bc30ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:13:40.005376   53267 certs.go:337] copying /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/apiserver.crt.78dfbc3d -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/apiserver.crt
	I0907 01:13:40.005473   53267 certs.go:341] copying /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/apiserver.key.78dfbc3d -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/apiserver.key
	I0907 01:13:40.005546   53267 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/proxy-client.key
	I0907 01:13:40.005571   53267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/proxy-client.crt with IP's: []
	I0907 01:13:40.258774   53267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/proxy-client.crt ...
	I0907 01:13:40.258816   53267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/proxy-client.crt: {Name:mk31dd76e989dde1f0245fc7c4eafdca30c38301 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:13:40.338145   53267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/proxy-client.key ...
	I0907 01:13:40.338186   53267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/proxy-client.key: {Name:mkf73be9f263f356fed1c28de4625c2eaa38596f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:13:40.338547   53267 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 01:13:40.338622   53267 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 01:13:40.338638   53267 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 01:13:40.338673   53267 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 01:13:40.338708   53267 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 01:13:40.338754   53267 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 01:13:40.338832   53267 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 01:13:40.339584   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 01:13:40.370555   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 01:13:40.398571   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 01:13:40.427442   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/kindnet-965889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 01:13:40.453719   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 01:13:40.480670   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 01:13:40.507808   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 01:13:40.538098   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 01:13:40.567037   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 01:13:40.592427   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 01:13:40.619293   53267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 01:13:40.645767   53267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 01:13:40.663940   53267 ssh_runner.go:195] Run: openssl version
	I0907 01:13:40.670893   53267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 01:13:40.685345   53267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 01:13:40.690515   53267 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 01:13:40.690583   53267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 01:13:40.697957   53267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 01:13:40.710028   53267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 01:13:40.722098   53267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 01:13:40.727424   53267 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 01:13:40.727494   53267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 01:13:40.734008   53267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 01:13:40.747578   53267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 01:13:40.761468   53267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 01:13:40.766710   53267 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 01:13:40.766768   53267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 01:13:40.772683   53267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 01:13:40.783072   53267 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 01:13:40.787556   53267 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0907 01:13:40.787616   53267 kubeadm.go:404] StartCluster: {Name:kindnet-965889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-965889 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 01:13:40.787806   53267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 01:13:40.787869   53267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 01:13:40.824458   53267 cri.go:89] found id: ""
	I0907 01:13:40.824553   53267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 01:13:40.835021   53267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 01:13:40.845225   53267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 01:13:40.855785   53267 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 01:13:40.855833   53267 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0907 01:13:40.918734   53267 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0907 01:13:40.918901   53267 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 01:13:41.084655   53267 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 01:13:41.084839   53267 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 01:13:41.084989   53267 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 01:13:41.288504   53267 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 01:13:41.357915   53267 out.go:204]   - Generating certificates and keys ...
	I0907 01:13:41.358101   53267 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 01:13:41.358258   53267 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 01:13:41.654238   53267 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0907 01:13:40.485375   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:13:42.753425   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:13:41.786219   53267 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0907 01:13:41.965959   53267 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0907 01:13:42.170658   53267 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0907 01:13:42.307882   53267 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0907 01:13:42.308044   53267 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kindnet-965889 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	I0907 01:13:42.561062   53267 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0907 01:13:42.561303   53267 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kindnet-965889 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	I0907 01:13:43.071198   53267 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0907 01:13:43.236718   53267 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0907 01:13:43.399287   53267 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0907 01:13:43.399649   53267 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 01:13:43.489314   53267 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 01:13:43.574091   53267 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 01:13:43.710877   53267 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 01:13:43.785653   53267 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 01:13:43.786289   53267 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 01:13:43.788709   53267 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 01:13:41.580369   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:41.580865   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:41.580909   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:41.580818   54041 retry.go:31] will retry after 1.679276855s: waiting for machine to come up
	I0907 01:13:43.261495   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:43.261908   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:43.261942   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:43.261854   54041 retry.go:31] will retry after 2.421228053s: waiting for machine to come up
	I0907 01:13:43.790872   53267 out.go:204]   - Booting up control plane ...
	I0907 01:13:43.790981   53267 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 01:13:43.792294   53267 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 01:13:43.793745   53267 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 01:13:43.812927   53267 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 01:13:43.813417   53267 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 01:13:43.813495   53267 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0907 01:13:43.943181   53267 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 01:13:45.254800   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:13:47.752631   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:13:45.685010   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:45.685496   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:45.685522   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:45.685378   54041 retry.go:31] will retry after 3.211385172s: waiting for machine to come up
	I0907 01:13:48.897870   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:48.898356   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:48.898385   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:48.898304   54041 retry.go:31] will retry after 3.738597697s: waiting for machine to come up
	I0907 01:13:50.254967   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:13:52.751697   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:13:52.445408   53267 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503223 seconds
	I0907 01:13:52.445545   53267 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 01:13:52.463164   53267 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 01:13:52.996067   53267 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 01:13:52.996307   53267 kubeadm.go:322] [mark-control-plane] Marking the node kindnet-965889 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0907 01:13:53.511568   53267 kubeadm.go:322] [bootstrap-token] Using token: clxr1f.4yrs3uf3t9iwgxy9
	I0907 01:13:53.513048   53267 out.go:204]   - Configuring RBAC rules ...
	I0907 01:13:53.513181   53267 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 01:13:53.518926   53267 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0907 01:13:53.532559   53267 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 01:13:53.536584   53267 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 01:13:53.541806   53267 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 01:13:53.546263   53267 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 01:13:53.574535   53267 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0907 01:13:53.820300   53267 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 01:13:53.930515   53267 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 01:13:53.930555   53267 kubeadm.go:322] 
	I0907 01:13:53.930630   53267 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 01:13:53.930643   53267 kubeadm.go:322] 
	I0907 01:13:53.930740   53267 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 01:13:53.930752   53267 kubeadm.go:322] 
	I0907 01:13:53.930797   53267 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 01:13:53.930886   53267 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 01:13:53.930966   53267 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 01:13:53.930977   53267 kubeadm.go:322] 
	I0907 01:13:53.931036   53267 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0907 01:13:53.931048   53267 kubeadm.go:322] 
	I0907 01:13:53.931115   53267 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0907 01:13:53.931127   53267 kubeadm.go:322] 
	I0907 01:13:53.931195   53267 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 01:13:53.931298   53267 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 01:13:53.931405   53267 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 01:13:53.931419   53267 kubeadm.go:322] 
	I0907 01:13:53.931544   53267 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0907 01:13:53.931642   53267 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 01:13:53.931652   53267 kubeadm.go:322] 
	I0907 01:13:53.931769   53267 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token clxr1f.4yrs3uf3t9iwgxy9 \
	I0907 01:13:53.931943   53267 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 01:13:53.931979   53267 kubeadm.go:322] 	--control-plane 
	I0907 01:13:53.931989   53267 kubeadm.go:322] 
	I0907 01:13:53.932115   53267 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 01:13:53.932158   53267 kubeadm.go:322] 
	I0907 01:13:53.932278   53267 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token clxr1f.4yrs3uf3t9iwgxy9 \
	I0907 01:13:53.932450   53267 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 01:13:53.932597   53267 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 01:13:53.932615   53267 cni.go:84] Creating CNI manager for "kindnet"
	I0907 01:13:53.934395   53267 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0907 01:13:52.639039   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:52.639536   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find current IP address of domain calico-965889 in network mk-calico-965889
	I0907 01:13:52.639565   53935 main.go:141] libmachine: (calico-965889) DBG | I0907 01:13:52.639480   54041 retry.go:31] will retry after 5.055583383s: waiting for machine to come up
	I0907 01:13:53.935844   53267 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0907 01:13:53.960015   53267 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0907 01:13:53.960041   53267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0907 01:13:53.985842   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0907 01:13:54.993697   53267 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.007814599s)
	I0907 01:13:54.993752   53267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 01:13:54.993842   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:54.993882   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=kindnet-965889 minikube.k8s.io/updated_at=2023_09_07T01_13_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:55.030065   53267 ops.go:34] apiserver oom_adj: -16
	I0907 01:13:55.209669   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:55.296471   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:55.882471   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:56.382746   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:54.753584   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:13:57.251856   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:13:57.696162   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:57.696605   53935 main.go:141] libmachine: (calico-965889) Found IP for machine: 192.168.72.94
	I0907 01:13:57.696628   53935 main.go:141] libmachine: (calico-965889) Reserving static IP address...
	I0907 01:13:57.696638   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has current primary IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:57.697004   53935 main.go:141] libmachine: (calico-965889) DBG | unable to find host DHCP lease matching {name: "calico-965889", mac: "52:54:00:95:3d:94", ip: "192.168.72.94"} in network mk-calico-965889
	I0907 01:13:57.775375   53935 main.go:141] libmachine: (calico-965889) DBG | Getting to WaitForSSH function...
	I0907 01:13:57.775401   53935 main.go:141] libmachine: (calico-965889) Reserved static IP address: 192.168.72.94
	I0907 01:13:57.775413   53935 main.go:141] libmachine: (calico-965889) Waiting for SSH to be available...
	I0907 01:13:57.778519   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:57.778892   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:minikube Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:57.778922   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:57.779088   53935 main.go:141] libmachine: (calico-965889) DBG | Using SSH client type: external
	I0907 01:13:57.779119   53935 main.go:141] libmachine: (calico-965889) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889/id_rsa (-rw-------)
	I0907 01:13:57.779158   53935 main.go:141] libmachine: (calico-965889) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 01:13:57.779179   53935 main.go:141] libmachine: (calico-965889) DBG | About to run SSH command:
	I0907 01:13:57.779191   53935 main.go:141] libmachine: (calico-965889) DBG | exit 0
	I0907 01:13:57.878742   53935 main.go:141] libmachine: (calico-965889) DBG | SSH cmd err, output: <nil>: 
	I0907 01:13:57.879035   53935 main.go:141] libmachine: (calico-965889) KVM machine creation complete!
	I0907 01:13:57.879384   53935 main.go:141] libmachine: (calico-965889) Calling .GetConfigRaw
	I0907 01:13:57.879939   53935 main.go:141] libmachine: (calico-965889) Calling .DriverName
	I0907 01:13:57.880167   53935 main.go:141] libmachine: (calico-965889) Calling .DriverName
	I0907 01:13:57.880355   53935 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0907 01:13:57.880376   53935 main.go:141] libmachine: (calico-965889) Calling .GetState
	I0907 01:13:57.881868   53935 main.go:141] libmachine: Detecting operating system of created instance...
	I0907 01:13:57.881886   53935 main.go:141] libmachine: Waiting for SSH to be available...
	I0907 01:13:57.881895   53935 main.go:141] libmachine: Getting to WaitForSSH function...
	I0907 01:13:57.881906   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHHostname
	I0907 01:13:57.884869   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:57.885277   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:57.885310   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:57.885434   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHPort
	I0907 01:13:57.885627   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:57.885793   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:57.885967   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHUsername
	I0907 01:13:57.886163   53935 main.go:141] libmachine: Using SSH client type: native
	I0907 01:13:57.886876   53935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0907 01:13:57.886895   53935 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0907 01:13:58.026418   53935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 01:13:58.026460   53935 main.go:141] libmachine: Detecting the provisioner...
	I0907 01:13:58.026482   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHHostname
	I0907 01:13:58.029574   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.029952   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:58.029981   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.030199   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHPort
	I0907 01:13:58.030398   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:58.030589   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:58.030803   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHUsername
	I0907 01:13:58.030991   53935 main.go:141] libmachine: Using SSH client type: native
	I0907 01:13:58.031394   53935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0907 01:13:58.031406   53935 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0907 01:13:58.167682   53935 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0907 01:13:58.167777   53935 main.go:141] libmachine: found compatible host: buildroot
	I0907 01:13:58.167792   53935 main.go:141] libmachine: Provisioning with buildroot...
	I0907 01:13:58.167803   53935 main.go:141] libmachine: (calico-965889) Calling .GetMachineName
	I0907 01:13:58.168068   53935 buildroot.go:166] provisioning hostname "calico-965889"
	I0907 01:13:58.168093   53935 main.go:141] libmachine: (calico-965889) Calling .GetMachineName
	I0907 01:13:58.168271   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHHostname
	I0907 01:13:58.171244   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.171620   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:58.171649   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.171843   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHPort
	I0907 01:13:58.172032   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:58.172178   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:58.172323   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHUsername
	I0907 01:13:58.172527   53935 main.go:141] libmachine: Using SSH client type: native
	I0907 01:13:58.172922   53935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0907 01:13:58.172936   53935 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-965889 && echo "calico-965889" | sudo tee /etc/hostname
	I0907 01:13:58.325185   53935 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-965889
	
	I0907 01:13:58.325220   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHHostname
	I0907 01:13:58.328172   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.328507   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:58.328542   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.328742   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHPort
	I0907 01:13:58.328925   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:58.329094   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:58.329255   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHUsername
	I0907 01:13:58.329528   53935 main.go:141] libmachine: Using SSH client type: native
	I0907 01:13:58.330073   53935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0907 01:13:58.330099   53935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-965889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-965889/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-965889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 01:13:58.473808   53935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 01:13:58.473833   53935 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 01:13:58.473849   53935 buildroot.go:174] setting up certificates
	I0907 01:13:58.473891   53935 provision.go:83] configureAuth start
	I0907 01:13:58.473900   53935 main.go:141] libmachine: (calico-965889) Calling .GetMachineName
	I0907 01:13:58.474159   53935 main.go:141] libmachine: (calico-965889) Calling .GetIP
	I0907 01:13:58.477034   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.477395   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:58.477427   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.477558   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHHostname
	I0907 01:13:58.479994   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.480394   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:58.480426   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.480578   53935 provision.go:138] copyHostCerts
	I0907 01:13:58.480644   53935 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 01:13:58.480657   53935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 01:13:58.480738   53935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 01:13:58.480843   53935 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 01:13:58.480852   53935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 01:13:58.480874   53935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 01:13:58.480929   53935 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 01:13:58.480935   53935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 01:13:58.480951   53935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 01:13:58.480993   53935 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.calico-965889 san=[192.168.72.94 192.168.72.94 localhost 127.0.0.1 minikube calico-965889]
	I0907 01:13:58.718218   53935 provision.go:172] copyRemoteCerts
	I0907 01:13:58.718287   53935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 01:13:58.718317   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHHostname
	I0907 01:13:58.721351   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.721744   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:58.721765   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.721970   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHPort
	I0907 01:13:58.722161   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:58.722334   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHUsername
	I0907 01:13:58.722439   53935 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889/id_rsa Username:docker}
	I0907 01:13:58.820326   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 01:13:58.846468   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0907 01:13:58.872638   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 01:13:58.899374   53935 provision.go:86] duration metric: configureAuth took 425.469124ms
	I0907 01:13:58.899411   53935 buildroot.go:189] setting minikube options for container-runtime
	I0907 01:13:58.899596   53935 config.go:182] Loaded profile config "calico-965889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 01:13:58.899668   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHHostname
	I0907 01:13:58.903829   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.904287   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:58.904320   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:58.904519   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHPort
	I0907 01:13:58.904769   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:58.904990   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:58.905169   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHUsername
	I0907 01:13:58.905348   53935 main.go:141] libmachine: Using SSH client type: native
	I0907 01:13:58.905957   53935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0907 01:13:58.905986   53935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 01:13:59.245949   53935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 01:13:59.245982   53935 main.go:141] libmachine: Checking connection to Docker...
	I0907 01:13:59.245990   53935 main.go:141] libmachine: (calico-965889) Calling .GetURL
	I0907 01:13:59.247449   53935 main.go:141] libmachine: (calico-965889) DBG | Using libvirt version 6000000
	I0907 01:13:59.250585   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.250954   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:59.251004   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.251193   53935 main.go:141] libmachine: Docker is up and running!
	I0907 01:13:59.251210   53935 main.go:141] libmachine: Reticulating splines...
	I0907 01:13:59.251217   53935 client.go:171] LocalClient.Create took 27.150022511s
	I0907 01:13:59.251243   53935 start.go:167] duration metric: libmachine.API.Create for "calico-965889" took 27.150079408s
	I0907 01:13:59.251264   53935 start.go:300] post-start starting for "calico-965889" (driver="kvm2")
	I0907 01:13:59.251275   53935 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 01:13:59.251295   53935 main.go:141] libmachine: (calico-965889) Calling .DriverName
	I0907 01:13:59.251554   53935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 01:13:59.251586   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHHostname
	I0907 01:13:59.254216   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.254542   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:59.254561   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.254708   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHPort
	I0907 01:13:59.254921   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:59.255089   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHUsername
	I0907 01:13:59.255247   53935 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889/id_rsa Username:docker}
	I0907 01:13:59.353120   53935 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 01:13:59.357408   53935 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 01:13:59.357446   53935 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 01:13:59.357535   53935 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 01:13:59.357634   53935 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 01:13:59.357740   53935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 01:13:59.366290   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 01:13:59.392162   53935 start.go:303] post-start completed in 140.884414ms
	I0907 01:13:59.392208   53935 main.go:141] libmachine: (calico-965889) Calling .GetConfigRaw
	I0907 01:13:59.392800   53935 main.go:141] libmachine: (calico-965889) Calling .GetIP
	I0907 01:13:59.395946   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.396341   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:59.396369   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.396721   53935 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/config.json ...
	I0907 01:13:59.396958   53935 start.go:128] duration metric: createHost completed in 27.316966608s
	I0907 01:13:59.397015   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHHostname
	I0907 01:13:59.399740   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.400270   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:59.400300   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.400470   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHPort
	I0907 01:13:59.400701   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:59.400887   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:59.401026   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHUsername
	I0907 01:13:59.401193   53935 main.go:141] libmachine: Using SSH client type: native
	I0907 01:13:59.401611   53935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0907 01:13:59.401626   53935 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 01:13:59.535661   53935 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694049239.514472938
	
	I0907 01:13:59.535684   53935 fix.go:206] guest clock: 1694049239.514472938
	I0907 01:13:59.535691   53935 fix.go:219] Guest: 2023-09-07 01:13:59.514472938 +0000 UTC Remote: 2023-09-07 01:13:59.396972743 +0000 UTC m=+44.046038243 (delta=117.500195ms)
	I0907 01:13:59.535714   53935 fix.go:190] guest clock delta is within tolerance: 117.500195ms
	I0907 01:13:59.535721   53935 start.go:83] releasing machines lock for "calico-965889", held for 27.455903827s
	I0907 01:13:59.535744   53935 main.go:141] libmachine: (calico-965889) Calling .DriverName
	I0907 01:13:59.536047   53935 main.go:141] libmachine: (calico-965889) Calling .GetIP
	I0907 01:13:59.538875   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.539202   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:59.539233   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.539455   53935 main.go:141] libmachine: (calico-965889) Calling .DriverName
	I0907 01:13:59.539920   53935 main.go:141] libmachine: (calico-965889) Calling .DriverName
	I0907 01:13:59.540126   53935 main.go:141] libmachine: (calico-965889) Calling .DriverName
	I0907 01:13:59.540230   53935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 01:13:59.540287   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHHostname
	I0907 01:13:59.540399   53935 ssh_runner.go:195] Run: cat /version.json
	I0907 01:13:59.540424   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHHostname
	I0907 01:13:59.542949   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.543344   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:59.543375   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.543395   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.543537   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHPort
	I0907 01:13:59.543692   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:59.543915   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:13:59.543939   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:13:59.543950   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHUsername
	I0907 01:13:59.544101   53935 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889/id_rsa Username:docker}
	I0907 01:13:59.544213   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHPort
	I0907 01:13:59.544397   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHKeyPath
	I0907 01:13:59.544568   53935 main.go:141] libmachine: (calico-965889) Calling .GetSSHUsername
	I0907 01:13:59.544751   53935 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/calico-965889/id_rsa Username:docker}
	I0907 01:13:59.671499   53935 ssh_runner.go:195] Run: systemctl --version
	I0907 01:13:59.678197   53935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 01:13:59.849081   53935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 01:13:59.856524   53935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 01:13:59.856610   53935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 01:13:59.872978   53935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 01:13:59.873002   53935 start.go:466] detecting cgroup driver to use...
	I0907 01:13:59.873069   53935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 01:13:59.892061   53935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 01:13:59.905648   53935 docker.go:196] disabling cri-docker service (if available) ...
	I0907 01:13:59.905749   53935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 01:13:59.919278   53935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 01:13:59.935916   53935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 01:14:00.058050   53935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 01:14:00.183303   53935 docker.go:212] disabling docker service ...
	I0907 01:14:00.183367   53935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 01:14:00.198416   53935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 01:14:00.211936   53935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 01:14:00.328724   53935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 01:14:00.472689   53935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 01:14:00.486667   53935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 01:14:00.505605   53935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 01:14:00.505694   53935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:14:00.515574   53935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 01:14:00.515629   53935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:14:00.525043   53935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:14:00.535330   53935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:14:00.545165   53935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 01:14:00.554870   53935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 01:14:00.563215   53935 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 01:14:00.563289   53935 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 01:14:00.576364   53935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 01:14:00.585372   53935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 01:14:00.708988   53935 ssh_runner.go:195] Run: sudo systemctl restart crio
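
The 01:14:00.505-00.708 runs above are the CRI-O preparation step: minikube rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup), reloads systemd, and restarts CRI-O. The Go sketch below only reconstructs those shell commands from the logged values; crioConfCommands is a hypothetical name, not minikube's actual helper.

    package main

    import "fmt"

    // crioConfCommands (hypothetical helper) returns the same in-place edits to
    // /etc/crio/crio.conf.d/02-crio.conf that the log above shows, followed by a
    // daemon-reload and a CRI-O restart.
    func crioConfCommands(pauseImage, cgroupManager string) []string {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	return []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
    		"sudo sed -i '/conmon_cgroup = .*/d' " + conf,
    		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart crio",
    	}
    }

    func main() {
    	// Values taken from the log: pause image registry.k8s.io/pause:3.9, cgroupfs driver.
    	for _, cmd := range crioConfCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
    		fmt.Println(cmd)
    	}
    }
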
	I0907 01:14:00.898886   53935 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 01:14:00.898977   53935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 01:14:00.904561   53935 start.go:534] Will wait 60s for crictl version
	I0907 01:14:00.904624   53935 ssh_runner.go:195] Run: which crictl
	I0907 01:14:00.908905   53935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 01:14:00.942419   53935 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 01:14:00.942523   53935 ssh_runner.go:195] Run: crio --version
	I0907 01:14:00.992565   53935 ssh_runner.go:195] Run: crio --version
	I0907 01:14:01.055297   53935 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 01:13:56.882714   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:57.382699   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:57.882438   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:58.381960   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:58.882822   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:59.382547   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:59.882385   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:00.382875   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:00.882560   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:01.382484   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:13:59.254041   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:14:01.255586   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:14:01.056823   53935 main.go:141] libmachine: (calico-965889) Calling .GetIP
	I0907 01:14:01.059615   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:14:01.059991   53935 main.go:141] libmachine: (calico-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:3d:94", ip: ""} in network mk-calico-965889: {Iface:virbr2 ExpiryTime:2023-09-07 02:13:50 +0000 UTC Type:0 Mac:52:54:00:95:3d:94 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:calico-965889 Clientid:01:52:54:00:95:3d:94}
	I0907 01:14:01.060020   53935 main.go:141] libmachine: (calico-965889) DBG | domain calico-965889 has defined IP address 192.168.72.94 and MAC address 52:54:00:95:3d:94 in network mk-calico-965889
	I0907 01:14:01.060207   53935 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0907 01:14:01.064648   53935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
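
The grep/echo/cp pipeline just above is how the host.minikube.internal record is written into the guest's /etc/hosts idempotently: any stale line for the name is dropped before the fresh "IP<TAB>name" entry is appended. A minimal sketch of building that same command string follows; hostsInjectCmd is a made-up helper name, and it assumes a real tab separates IP and hostname as in the log.

    package main

    import "fmt"

    // hostsInjectCmd (hypothetical name, not minikube's API) rebuilds the
    // idempotent /etc/hosts update shown above: drop any existing line ending in
    // the given name, append a fresh "IP<TAB>name" record, then copy it back.
    func hostsInjectCmd(ip, name string) string {
    	record := ip + "\t" + name // real tab between IP and hostname, as in the log
    	return fmt.Sprintf(
    		`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
    		name, record)
    }

    func main() {
    	fmt.Println(hostsInjectCmd("192.168.72.1", "host.minikube.internal"))
    }
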
	I0907 01:14:01.078821   53935 localpath.go:92] copying /home/jenkins/minikube-integration/17174-6470/.minikube/client.crt -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/client.crt
	I0907 01:14:01.078941   53935 localpath.go:117] copying /home/jenkins/minikube-integration/17174-6470/.minikube/client.key -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/client.key
	I0907 01:14:01.079059   53935 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 01:14:01.079103   53935 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 01:14:01.106543   53935 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 01:14:01.106615   53935 ssh_runner.go:195] Run: which lz4
	I0907 01:14:01.111060   53935 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0907 01:14:01.115631   53935 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 01:14:01.115666   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0907 01:14:02.903861   53935 crio.go:444] Took 1.792831 seconds to copy over tarball
	I0907 01:14:02.903941   53935 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 01:14:01.882315   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:02.381848   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:02.882614   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:03.382591   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:03.882042   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:04.382310   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:04.882700   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:05.382860   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:05.882370   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:06.382397   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:06.882456   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:07.382027   53267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:07.521975   53267 kubeadm.go:1081] duration metric: took 12.528191367s to wait for elevateKubeSystemPrivileges.
	I0907 01:14:07.522015   53267 kubeadm.go:406] StartCluster complete in 26.734404171s
	I0907 01:14:07.522035   53267 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:14:07.522129   53267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 01:14:07.523549   53267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:14:07.523791   53267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 01:14:07.523902   53267 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 01:14:07.523997   53267 addons.go:69] Setting storage-provisioner=true in profile "kindnet-965889"
	I0907 01:14:07.524008   53267 addons.go:69] Setting default-storageclass=true in profile "kindnet-965889"
	I0907 01:14:07.524016   53267 addons.go:231] Setting addon storage-provisioner=true in "kindnet-965889"
	I0907 01:14:07.524031   53267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-965889"
	I0907 01:14:07.524042   53267 config.go:182] Loaded profile config "kindnet-965889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 01:14:07.524056   53267 host.go:66] Checking if "kindnet-965889" exists ...
	I0907 01:14:07.524452   53267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:14:07.524467   53267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:14:07.524492   53267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:14:07.524493   53267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:14:07.544567   53267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45263
	I0907 01:14:07.544687   53267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41615
	I0907 01:14:07.545444   53267 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:14:07.545455   53267 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:14:07.546036   53267 main.go:141] libmachine: Using API Version  1
	I0907 01:14:07.546051   53267 main.go:141] libmachine: Using API Version  1
	I0907 01:14:07.546063   53267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:14:07.546066   53267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:14:07.546389   53267 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:14:07.546498   53267 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:14:07.546683   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetState
	I0907 01:14:07.546967   53267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:14:07.546980   53267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:14:07.560992   53267 addons.go:231] Setting addon default-storageclass=true in "kindnet-965889"
	I0907 01:14:07.561052   53267 host.go:66] Checking if "kindnet-965889" exists ...
	I0907 01:14:07.561486   53267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:14:07.561521   53267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:14:07.563821   53267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0907 01:14:07.564371   53267 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:14:07.564838   53267 main.go:141] libmachine: Using API Version  1
	I0907 01:14:07.564857   53267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:14:07.565362   53267 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:14:07.565566   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetState
	I0907 01:14:07.567101   53267 main.go:141] libmachine: (kindnet-965889) Calling .DriverName
	I0907 01:14:07.567291   53267 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-965889" context rescaled to 1 replicas
	I0907 01:14:07.567315   53267 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 01:14:07.569119   53267 out.go:177] * Verifying Kubernetes components...
	I0907 01:14:07.570650   53267 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 01:14:03.256137   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:14:05.754047   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:14:07.572049   53267 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 01:14:07.572066   53267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 01:14:07.572083   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:14:07.570625   53267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 01:14:07.575064   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:14:07.575687   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:14:07.575708   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:14:07.575741   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHPort
	I0907 01:14:07.575914   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:14:07.576046   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHUsername
	I0907 01:14:07.576142   53267 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/kindnet-965889/id_rsa Username:docker}
	I0907 01:14:07.579170   53267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I0907 01:14:07.579589   53267 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:14:07.580072   53267 main.go:141] libmachine: Using API Version  1
	I0907 01:14:07.580091   53267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:14:07.580459   53267 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:14:07.580989   53267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:14:07.581011   53267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:14:07.595591   53267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0907 01:14:07.595969   53267 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:14:07.596644   53267 main.go:141] libmachine: Using API Version  1
	I0907 01:14:07.596660   53267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:14:07.597039   53267 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:14:07.597226   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetState
	I0907 01:14:07.598723   53267 main.go:141] libmachine: (kindnet-965889) Calling .DriverName
	I0907 01:14:07.599076   53267 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 01:14:07.599088   53267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 01:14:07.599101   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHHostname
	I0907 01:14:07.601725   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:14:07.602126   53267 main.go:141] libmachine: (kindnet-965889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:98:a9", ip: ""} in network mk-kindnet-965889: {Iface:virbr4 ExpiryTime:2023-09-07 02:13:18 +0000 UTC Type:0 Mac:52:54:00:ab:98:a9 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kindnet-965889 Clientid:01:52:54:00:ab:98:a9}
	I0907 01:14:07.602150   53267 main.go:141] libmachine: (kindnet-965889) DBG | domain kindnet-965889 has defined IP address 192.168.50.23 and MAC address 52:54:00:ab:98:a9 in network mk-kindnet-965889
	I0907 01:14:07.602288   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHPort
	I0907 01:14:07.602454   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHKeyPath
	I0907 01:14:07.602642   53267 main.go:141] libmachine: (kindnet-965889) Calling .GetSSHUsername
	I0907 01:14:07.602813   53267 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/kindnet-965889/id_rsa Username:docker}
	I0907 01:14:07.768844   53267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 01:14:07.787978   53267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 01:14:07.813074   53267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
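
The long pipeline above edits CoreDNS's Corefile in place: it fetches the coredns ConfigMap, uses sed to insert a hosts { ... } block (resolving host.minikube.internal to 192.168.50.1) ahead of the forward . /etc/resolv.conf plugin, adds log after errors, and then replaces the ConfigMap. A rough stdlib-only sketch of the same Corefile transformation; corefileWithHosts is a hypothetical function, not minikube's code.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // corefileWithHosts mirrors the sed edit in the log above: insert a hosts
    // block, mapping host.minikube.internal to hostIP, immediately before the
    // "forward . /etc/resolv.conf" plugin line.
    func corefileWithHosts(corefile, hostIP string) string {
    	block := "        hosts {\n" +
    		"           " + hostIP + " host.minikube.internal\n" +
    		"           fallthrough\n" +
    		"        }\n"
    	return strings.Replace(corefile,
    		"        forward . /etc/resolv.conf",
    		block+"        forward . /etc/resolv.conf", 1)
    }

    func main() {
    	// Simplified Corefile fragment for illustration only.
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n    }\n"
    	fmt.Println(corefileWithHosts(corefile, "192.168.50.1"))
    }
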
	I0907 01:14:07.814343   53267 node_ready.go:35] waiting up to 15m0s for node "kindnet-965889" to be "Ready" ...
	I0907 01:14:08.862703   53267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.093823592s)
	I0907 01:14:08.862748   53267 main.go:141] libmachine: Making call to close driver server
	I0907 01:14:08.862763   53267 main.go:141] libmachine: (kindnet-965889) Calling .Close
	I0907 01:14:08.862791   53267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.074766547s)
	I0907 01:14:08.862846   53267 main.go:141] libmachine: Making call to close driver server
	I0907 01:14:08.862864   53267 main.go:141] libmachine: (kindnet-965889) Calling .Close
	I0907 01:14:08.862866   53267 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.049764597s)
	I0907 01:14:08.862881   53267 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0907 01:14:08.863198   53267 main.go:141] libmachine: (kindnet-965889) DBG | Closing plugin on server side
	I0907 01:14:08.863218   53267 main.go:141] libmachine: Successfully made call to close driver server
	I0907 01:14:08.863233   53267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 01:14:08.863244   53267 main.go:141] libmachine: Making call to close driver server
	I0907 01:14:08.863254   53267 main.go:141] libmachine: (kindnet-965889) Calling .Close
	I0907 01:14:08.863259   53267 main.go:141] libmachine: Successfully made call to close driver server
	I0907 01:14:08.863273   53267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 01:14:08.863283   53267 main.go:141] libmachine: Making call to close driver server
	I0907 01:14:08.863305   53267 main.go:141] libmachine: (kindnet-965889) Calling .Close
	I0907 01:14:08.863449   53267 main.go:141] libmachine: Successfully made call to close driver server
	I0907 01:14:08.863470   53267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 01:14:08.864775   53267 main.go:141] libmachine: Successfully made call to close driver server
	I0907 01:14:08.864785   53267 main.go:141] libmachine: (kindnet-965889) DBG | Closing plugin on server side
	I0907 01:14:08.864788   53267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 01:14:08.864804   53267 main.go:141] libmachine: Making call to close driver server
	I0907 01:14:08.864821   53267 main.go:141] libmachine: (kindnet-965889) Calling .Close
	I0907 01:14:08.865065   53267 main.go:141] libmachine: Successfully made call to close driver server
	I0907 01:14:08.865087   53267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 01:14:08.865099   53267 main.go:141] libmachine: (kindnet-965889) DBG | Closing plugin on server side
	I0907 01:14:08.867066   53267 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0907 01:14:06.037499   53935 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.133524696s)
	I0907 01:14:06.037542   53935 crio.go:451] Took 3.133655 seconds to extract the tarball
	I0907 01:14:06.037552   53935 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 01:14:06.084618   53935 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 01:14:06.146152   53935 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 01:14:06.146178   53935 cache_images.go:84] Images are preloaded, skipping loading
	I0907 01:14:06.146248   53935 ssh_runner.go:195] Run: crio config
	I0907 01:14:06.211225   53935 cni.go:84] Creating CNI manager for "calico"
	I0907 01:14:06.211291   53935 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 01:14:06.211317   53935 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.94 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-965889 NodeName:calico-965889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 01:14:06.211507   53935 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-965889"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 01:14:06.211595   53935 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=calico-965889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:calico-965889 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
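
The generated kubeadm config printed above (before the kubelet systemd drop-in) is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. As a small illustration of that structure, the stdlib-only sketch below splits such a stream on --- separators and lists each kind; documentKinds is a made-up name, not part of minikube.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // documentKinds splits a multi-document kubeadm config like the one logged
    // above on "---" separators and reports the "kind:" of each document.
    func documentKinds(cfg string) []string {
    	var kinds []string
    	for _, doc := range strings.Split(cfg, "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			line = strings.TrimSpace(line)
    			if strings.HasPrefix(line, "kind:") {
    				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
    			}
    		}
    	}
    	return kinds
    }

    func main() {
    	cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
    	fmt.Println(documentKinds(cfg))
    	// [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
    }
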
	I0907 01:14:06.211665   53935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 01:14:06.221487   53935 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 01:14:06.221545   53935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 01:14:06.230841   53935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0907 01:14:06.248732   53935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 01:14:06.267497   53935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0907 01:14:06.285322   53935 ssh_runner.go:195] Run: grep 192.168.72.94	control-plane.minikube.internal$ /etc/hosts
	I0907 01:14:06.289652   53935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 01:14:06.303377   53935 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889 for IP: 192.168.72.94
	I0907 01:14:06.303409   53935 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:14:06.303562   53935 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 01:14:06.303619   53935 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 01:14:06.303689   53935 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/client.key
	I0907 01:14:06.303709   53935 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/apiserver.key.90452619
	I0907 01:14:06.303722   53935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/apiserver.crt.90452619 with IP's: [192.168.72.94 10.96.0.1 127.0.0.1 10.0.0.1]
	I0907 01:14:06.599178   53935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/apiserver.crt.90452619 ...
	I0907 01:14:06.599204   53935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/apiserver.crt.90452619: {Name:mk24be66cda277fc6247ab05dc586e2eac99309d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:14:06.599385   53935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/apiserver.key.90452619 ...
	I0907 01:14:06.599398   53935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/apiserver.key.90452619: {Name:mk488c3a5130312fbf1b41dffbaad8a6068c8172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:14:06.599494   53935 certs.go:337] copying /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/apiserver.crt.90452619 -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/apiserver.crt
	I0907 01:14:06.599555   53935 certs.go:341] copying /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/apiserver.key.90452619 -> /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/apiserver.key
	I0907 01:14:06.599601   53935 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/proxy-client.key
	I0907 01:14:06.599614   53935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/proxy-client.crt with IP's: []
	I0907 01:14:06.723432   53935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/proxy-client.crt ...
	I0907 01:14:06.723558   53935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/proxy-client.crt: {Name:mk563562da3a274fffa3a935532e15aefab92fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:14:06.723754   53935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/proxy-client.key ...
	I0907 01:14:06.723766   53935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/proxy-client.key: {Name:mk4710ceb80ec9f9674e02cc957a1e08b7b264fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:14:06.723951   53935 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 01:14:06.723988   53935 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 01:14:06.723998   53935 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 01:14:06.724019   53935 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 01:14:06.724046   53935 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 01:14:06.724067   53935 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 01:14:06.724102   53935 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 01:14:06.724652   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 01:14:06.753141   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 01:14:06.779205   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 01:14:06.805450   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/calico-965889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 01:14:06.831242   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 01:14:06.857036   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 01:14:06.881292   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 01:14:06.910218   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 01:14:06.936961   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 01:14:06.961092   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 01:14:06.986090   53935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 01:14:07.011738   53935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 01:14:07.030218   53935 ssh_runner.go:195] Run: openssl version
	I0907 01:14:07.036739   53935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 01:14:07.049130   53935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 01:14:07.054749   53935 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 01:14:07.054832   53935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 01:14:07.060672   53935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 01:14:07.071676   53935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 01:14:07.082708   53935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 01:14:07.088035   53935 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 01:14:07.088097   53935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 01:14:07.094276   53935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 01:14:07.104672   53935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 01:14:07.115340   53935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 01:14:07.120152   53935 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 01:14:07.120213   53935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 01:14:07.125969   53935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 01:14:07.138552   53935 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 01:14:07.144037   53935 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0907 01:14:07.144092   53935 kubeadm.go:404] StartCluster: {Name:calico-965889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-965889 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 01:14:07.144191   53935 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 01:14:07.144240   53935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 01:14:07.177901   53935 cri.go:89] found id: ""
	I0907 01:14:07.177963   53935 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 01:14:07.189676   53935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 01:14:07.201180   53935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 01:14:07.210690   53935 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 01:14:07.210734   53935 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0907 01:14:07.430190   53935 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 01:14:08.869249   53267 addons.go:502] enable addons completed in 1.345344717s: enabled=[storage-provisioner default-storageclass]
	I0907 01:14:09.933042   53267 node_ready.go:58] node "kindnet-965889" has status "Ready":"False"
	I0907 01:14:08.254769   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:14:10.258677   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:14:12.753668   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:14:12.433471   53267 node_ready.go:58] node "kindnet-965889" has status "Ready":"False"
	I0907 01:14:13.933499   53267 node_ready.go:49] node "kindnet-965889" has status "Ready":"True"
	I0907 01:14:13.933529   53267 node_ready.go:38] duration metric: took 6.119157082s waiting for node "kindnet-965889" to be "Ready" ...
	I0907 01:14:13.933539   53267 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 01:14:13.941422   53267 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-dfvhx" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:15.976365   53267 pod_ready.go:102] pod "coredns-5dd5756b68-dfvhx" in "kube-system" namespace has status "Ready":"False"
	I0907 01:14:14.753822   52813 pod_ready.go:102] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"False"
	I0907 01:14:15.754191   52813 pod_ready.go:92] pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace has status "Ready":"True"
	I0907 01:14:15.754272   52813 pod_ready.go:81] duration metric: took 40.037372335s waiting for pod "coredns-5dd5756b68-cq8zf" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:15.754308   52813 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-s7m2r" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:15.757598   52813 pod_ready.go:97] error getting pod "coredns-5dd5756b68-s7m2r" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-s7m2r" not found
	I0907 01:14:15.757629   52813 pod_ready.go:81] duration metric: took 3.295363ms waiting for pod "coredns-5dd5756b68-s7m2r" in "kube-system" namespace to be "Ready" ...
	E0907 01:14:15.757641   52813 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-s7m2r" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-s7m2r" not found
	I0907 01:14:15.757651   52813 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:15.764830   52813 pod_ready.go:92] pod "etcd-auto-965889" in "kube-system" namespace has status "Ready":"True"
	I0907 01:14:15.764856   52813 pod_ready.go:81] duration metric: took 7.19684ms waiting for pod "etcd-auto-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:15.764867   52813 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:15.771022   52813 pod_ready.go:92] pod "kube-apiserver-auto-965889" in "kube-system" namespace has status "Ready":"True"
	I0907 01:14:15.771042   52813 pod_ready.go:81] duration metric: took 6.167165ms waiting for pod "kube-apiserver-auto-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:15.771053   52813 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:15.777219   52813 pod_ready.go:92] pod "kube-controller-manager-auto-965889" in "kube-system" namespace has status "Ready":"True"
	I0907 01:14:15.777237   52813 pod_ready.go:81] duration metric: took 6.17716ms waiting for pod "kube-controller-manager-auto-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:15.777245   52813 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-q25b5" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:15.949634   52813 pod_ready.go:92] pod "kube-proxy-q25b5" in "kube-system" namespace has status "Ready":"True"
	I0907 01:14:15.949659   52813 pod_ready.go:81] duration metric: took 172.407906ms waiting for pod "kube-proxy-q25b5" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:15.949672   52813 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:16.349307   52813 pod_ready.go:92] pod "kube-scheduler-auto-965889" in "kube-system" namespace has status "Ready":"True"
	I0907 01:14:16.349336   52813 pod_ready.go:81] duration metric: took 399.656254ms waiting for pod "kube-scheduler-auto-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:16.349347   52813 pod_ready.go:38] duration metric: took 40.666780244s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 01:14:16.349368   52813 api_server.go:52] waiting for apiserver process to appear ...
	I0907 01:14:16.349425   52813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 01:14:16.367574   52813 api_server.go:72] duration metric: took 41.021851922s to wait for apiserver process to appear ...
	I0907 01:14:16.367606   52813 api_server.go:88] waiting for apiserver healthz status ...
	I0907 01:14:16.367626   52813 api_server.go:253] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0907 01:14:16.373872   52813 api_server.go:279] https://192.168.61.35:8443/healthz returned 200:
	ok
	I0907 01:14:16.377252   52813 api_server.go:141] control plane version: v1.28.1
	I0907 01:14:16.377277   52813 api_server.go:131] duration metric: took 9.664013ms to wait for apiserver health ...
	I0907 01:14:16.377287   52813 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 01:14:16.555624   52813 system_pods.go:59] 7 kube-system pods found
	I0907 01:14:16.555660   52813 system_pods.go:61] "coredns-5dd5756b68-cq8zf" [d6e58863-fe10-4b3c-a9c1-6ea741f1f4bf] Running
	I0907 01:14:16.555668   52813 system_pods.go:61] "etcd-auto-965889" [5e1344d6-ae1e-460f-a1f2-a80484ce8e21] Running
	I0907 01:14:16.555674   52813 system_pods.go:61] "kube-apiserver-auto-965889" [b1b17731-70a2-4f44-a837-8170a6187ec4] Running
	I0907 01:14:16.555681   52813 system_pods.go:61] "kube-controller-manager-auto-965889" [52c862ff-54d8-4895-b2f3-7cd0f34abad3] Running
	I0907 01:14:16.555690   52813 system_pods.go:61] "kube-proxy-q25b5" [a4d9f522-91ac-42fb-8604-43bade4a6145] Running
	I0907 01:14:16.555696   52813 system_pods.go:61] "kube-scheduler-auto-965889" [223180f8-d104-4881-9264-35a77b1a5cc6] Running
	I0907 01:14:16.555702   52813 system_pods.go:61] "storage-provisioner" [5b8c4e8a-d129-460e-bc24-d530b53afaba] Running
	I0907 01:14:16.555712   52813 system_pods.go:74] duration metric: took 178.417278ms to wait for pod list to return data ...
	I0907 01:14:16.555726   52813 default_sa.go:34] waiting for default service account to be created ...
	I0907 01:14:16.749337   52813 default_sa.go:45] found service account: "default"
	I0907 01:14:16.749360   52813 default_sa.go:55] duration metric: took 193.626801ms for default service account to be created ...
	I0907 01:14:16.749368   52813 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 01:14:16.952024   52813 system_pods.go:86] 7 kube-system pods found
	I0907 01:14:16.952055   52813 system_pods.go:89] "coredns-5dd5756b68-cq8zf" [d6e58863-fe10-4b3c-a9c1-6ea741f1f4bf] Running
	I0907 01:14:16.952060   52813 system_pods.go:89] "etcd-auto-965889" [5e1344d6-ae1e-460f-a1f2-a80484ce8e21] Running
	I0907 01:14:16.952065   52813 system_pods.go:89] "kube-apiserver-auto-965889" [b1b17731-70a2-4f44-a837-8170a6187ec4] Running
	I0907 01:14:16.952069   52813 system_pods.go:89] "kube-controller-manager-auto-965889" [52c862ff-54d8-4895-b2f3-7cd0f34abad3] Running
	I0907 01:14:16.952073   52813 system_pods.go:89] "kube-proxy-q25b5" [a4d9f522-91ac-42fb-8604-43bade4a6145] Running
	I0907 01:14:16.952077   52813 system_pods.go:89] "kube-scheduler-auto-965889" [223180f8-d104-4881-9264-35a77b1a5cc6] Running
	I0907 01:14:16.952081   52813 system_pods.go:89] "storage-provisioner" [5b8c4e8a-d129-460e-bc24-d530b53afaba] Running
	I0907 01:14:16.952087   52813 system_pods.go:126] duration metric: took 202.714845ms to wait for k8s-apps to be running ...
	I0907 01:14:16.952093   52813 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 01:14:16.952133   52813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 01:14:16.969248   52813 system_svc.go:56] duration metric: took 17.143163ms WaitForService to wait for kubelet.
	I0907 01:14:16.969277   52813 kubeadm.go:581] duration metric: took 41.623559349s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 01:14:16.969300   52813 node_conditions.go:102] verifying NodePressure condition ...
	I0907 01:14:17.150392   52813 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 01:14:17.150430   52813 node_conditions.go:123] node cpu capacity is 2
	I0907 01:14:17.150443   52813 node_conditions.go:105] duration metric: took 181.139498ms to run NodePressure ...
	I0907 01:14:17.150454   52813 start.go:228] waiting for startup goroutines ...
	I0907 01:14:17.150460   52813 start.go:233] waiting for cluster config update ...
	I0907 01:14:17.150469   52813 start.go:242] writing updated cluster config ...
	I0907 01:14:17.150734   52813 ssh_runner.go:195] Run: rm -f paused
	I0907 01:14:17.200596   52813 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 01:14:17.203492   52813 out.go:177] * Done! kubectl is now configured to use "auto-965889" cluster and "default" namespace by default
	I0907 01:14:17.465835   53267 pod_ready.go:92] pod "coredns-5dd5756b68-dfvhx" in "kube-system" namespace has status "Ready":"True"
	I0907 01:14:17.465866   53267 pod_ready.go:81] duration metric: took 3.524422262s waiting for pod "coredns-5dd5756b68-dfvhx" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:17.465879   53267 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:17.480370   53267 pod_ready.go:92] pod "etcd-kindnet-965889" in "kube-system" namespace has status "Ready":"True"
	I0907 01:14:17.480388   53267 pod_ready.go:81] duration metric: took 14.50295ms waiting for pod "etcd-kindnet-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:17.480400   53267 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:17.491032   53267 pod_ready.go:92] pod "kube-apiserver-kindnet-965889" in "kube-system" namespace has status "Ready":"True"
	I0907 01:14:17.491052   53267 pod_ready.go:81] duration metric: took 10.646527ms waiting for pod "kube-apiserver-kindnet-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:17.491063   53267 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:17.498162   53267 pod_ready.go:92] pod "kube-controller-manager-kindnet-965889" in "kube-system" namespace has status "Ready":"True"
	I0907 01:14:17.498192   53267 pod_ready.go:81] duration metric: took 7.121316ms waiting for pod "kube-controller-manager-kindnet-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:17.498205   53267 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-qtngl" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:17.532484   53267 pod_ready.go:92] pod "kube-proxy-qtngl" in "kube-system" namespace has status "Ready":"True"
	I0907 01:14:17.532516   53267 pod_ready.go:81] duration metric: took 34.303064ms waiting for pod "kube-proxy-qtngl" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:17.532529   53267 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:17.933464   53267 pod_ready.go:92] pod "kube-scheduler-kindnet-965889" in "kube-system" namespace has status "Ready":"True"
	I0907 01:14:17.933484   53267 pod_ready.go:81] duration metric: took 400.947885ms waiting for pod "kube-scheduler-kindnet-965889" in "kube-system" namespace to be "Ready" ...
	I0907 01:14:17.933494   53267 pod_ready.go:38] duration metric: took 3.99994325s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 01:14:17.933509   53267 api_server.go:52] waiting for apiserver process to appear ...
	I0907 01:14:17.933567   53267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 01:14:17.948667   53267 api_server.go:72] duration metric: took 10.381323954s to wait for apiserver process to appear ...
	I0907 01:14:17.948690   53267 api_server.go:88] waiting for apiserver healthz status ...
	I0907 01:14:17.948705   53267 api_server.go:253] Checking apiserver healthz at https://192.168.50.23:8443/healthz ...
	I0907 01:14:17.957054   53267 api_server.go:279] https://192.168.50.23:8443/healthz returned 200:
	ok
	I0907 01:14:17.959052   53267 api_server.go:141] control plane version: v1.28.1
	I0907 01:14:17.959079   53267 api_server.go:131] duration metric: took 10.382339ms to wait for apiserver health ...
	I0907 01:14:17.959089   53267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 01:14:18.143088   53267 system_pods.go:59] 8 kube-system pods found
	I0907 01:14:18.143116   53267 system_pods.go:61] "coredns-5dd5756b68-dfvhx" [20de080c-7586-4200-8bd3-697acf40c57e] Running
	I0907 01:14:18.143122   53267 system_pods.go:61] "etcd-kindnet-965889" [335dd751-e46b-4cf9-b928-8de8e4b3b0ab] Running
	I0907 01:14:18.143126   53267 system_pods.go:61] "kindnet-dkttl" [b365e693-4da9-4a03-a263-1d21a52edd36] Running
	I0907 01:14:18.143130   53267 system_pods.go:61] "kube-apiserver-kindnet-965889" [e9d19744-41bb-4577-8cea-b8b8bd489f7a] Running
	I0907 01:14:18.143135   53267 system_pods.go:61] "kube-controller-manager-kindnet-965889" [1987837e-55b0-46eb-a60e-377a26fa3615] Running
	I0907 01:14:18.143138   53267 system_pods.go:61] "kube-proxy-qtngl" [4099c2e7-2e22-4d69-82dd-7c47ec410a99] Running
	I0907 01:14:18.143144   53267 system_pods.go:61] "kube-scheduler-kindnet-965889" [270c9cd2-e47e-4575-a772-50809851a793] Running
	I0907 01:14:18.143150   53267 system_pods.go:61] "storage-provisioner" [a3418cbd-3eaf-462e-a03f-7d646550500d] Running
	I0907 01:14:18.143157   53267 system_pods.go:74] duration metric: took 184.062453ms to wait for pod list to return data ...
	I0907 01:14:18.143165   53267 default_sa.go:34] waiting for default service account to be created ...
	I0907 01:14:18.333046   53267 default_sa.go:45] found service account: "default"
	I0907 01:14:18.333068   53267 default_sa.go:55] duration metric: took 189.897581ms for default service account to be created ...
	I0907 01:14:18.333078   53267 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 01:14:18.537091   53267 system_pods.go:86] 8 kube-system pods found
	I0907 01:14:18.537126   53267 system_pods.go:89] "coredns-5dd5756b68-dfvhx" [20de080c-7586-4200-8bd3-697acf40c57e] Running
	I0907 01:14:18.537135   53267 system_pods.go:89] "etcd-kindnet-965889" [335dd751-e46b-4cf9-b928-8de8e4b3b0ab] Running
	I0907 01:14:18.537142   53267 system_pods.go:89] "kindnet-dkttl" [b365e693-4da9-4a03-a263-1d21a52edd36] Running
	I0907 01:14:18.537149   53267 system_pods.go:89] "kube-apiserver-kindnet-965889" [e9d19744-41bb-4577-8cea-b8b8bd489f7a] Running
	I0907 01:14:18.537156   53267 system_pods.go:89] "kube-controller-manager-kindnet-965889" [1987837e-55b0-46eb-a60e-377a26fa3615] Running
	I0907 01:14:18.537162   53267 system_pods.go:89] "kube-proxy-qtngl" [4099c2e7-2e22-4d69-82dd-7c47ec410a99] Running
	I0907 01:14:18.537169   53267 system_pods.go:89] "kube-scheduler-kindnet-965889" [270c9cd2-e47e-4575-a772-50809851a793] Running
	I0907 01:14:18.537175   53267 system_pods.go:89] "storage-provisioner" [a3418cbd-3eaf-462e-a03f-7d646550500d] Running
	I0907 01:14:18.537184   53267 system_pods.go:126] duration metric: took 204.100683ms to wait for k8s-apps to be running ...
	I0907 01:14:18.537197   53267 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 01:14:18.537249   53267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 01:14:18.554656   53267 system_svc.go:56] duration metric: took 17.449897ms WaitForService to wait for kubelet.
	I0907 01:14:18.554683   53267 kubeadm.go:581] duration metric: took 10.987341449s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 01:14:18.554706   53267 node_conditions.go:102] verifying NodePressure condition ...
	I0907 01:14:18.733439   53267 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 01:14:18.733475   53267 node_conditions.go:123] node cpu capacity is 2
	I0907 01:14:18.733497   53267 node_conditions.go:105] duration metric: took 178.786337ms to run NodePressure ...
	I0907 01:14:18.733512   53267 start.go:228] waiting for startup goroutines ...
	I0907 01:14:18.733520   53267 start.go:233] waiting for cluster config update ...
	I0907 01:14:18.733533   53267 start.go:242] writing updated cluster config ...
	I0907 01:14:18.733843   53267 ssh_runner.go:195] Run: rm -f paused
	I0907 01:14:18.797166   53267 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 01:14:18.798898   53267 out.go:177] * Done! kubectl is now configured to use "kindnet-965889" cluster and "default" namespace by default
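	The log above for the "kindnet-965889" cluster follows the usual post-start verification order: wait for the system pods to report Ready, confirm a kube-apiserver process exists, poll the apiserver /healthz endpoint, list kube-system pods, check the default service account, verify the kubelet service, and finally inspect node conditions. As a rough illustration of the healthz step only, here is a minimal Go sketch (not minikube's actual implementation); the URL, timeout, and the insecure TLS setting are assumptions taken from or added around the log for illustration.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver healthz endpoint until it returns
	// 200 "ok" or the deadline passes, mirroring the api_server.go wait above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The real wait trusts the cluster CA; skipping verification here
			// only keeps the sketch self-contained (assumption, not minikube's behavior).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		// Address taken from the log above; hypothetical outside this run.
		if err := waitForHealthz("https://192.168.50.23:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}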
	I0907 01:14:20.292782   53935 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0907 01:14:20.292852   53935 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 01:14:20.292961   53935 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 01:14:20.293085   53935 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 01:14:20.293207   53935 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 01:14:20.293289   53935 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 01:14:20.294940   53935 out.go:204]   - Generating certificates and keys ...
	I0907 01:14:20.295068   53935 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 01:14:20.295161   53935 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 01:14:20.295259   53935 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0907 01:14:20.295334   53935 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0907 01:14:20.295408   53935 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0907 01:14:20.295489   53935 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0907 01:14:20.295574   53935 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0907 01:14:20.295730   53935 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [calico-965889 localhost] and IPs [192.168.72.94 127.0.0.1 ::1]
	I0907 01:14:20.295804   53935 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0907 01:14:20.295953   53935 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [calico-965889 localhost] and IPs [192.168.72.94 127.0.0.1 ::1]
	I0907 01:14:20.296040   53935 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0907 01:14:20.296122   53935 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0907 01:14:20.296182   53935 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0907 01:14:20.296247   53935 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 01:14:20.296312   53935 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 01:14:20.296381   53935 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 01:14:20.296463   53935 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 01:14:20.296539   53935 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 01:14:20.296644   53935 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 01:14:20.296734   53935 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 01:14:20.298313   53935 out.go:204]   - Booting up control plane ...
	I0907 01:14:20.298430   53935 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 01:14:20.298547   53935 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 01:14:20.298645   53935 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 01:14:20.298792   53935 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 01:14:20.298904   53935 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 01:14:20.298952   53935 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0907 01:14:20.299148   53935 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 01:14:20.299247   53935 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503241 seconds
	I0907 01:14:20.299393   53935 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 01:14:20.299547   53935 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 01:14:20.299623   53935 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 01:14:20.299852   53935 kubeadm.go:322] [mark-control-plane] Marking the node calico-965889 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0907 01:14:20.299922   53935 kubeadm.go:322] [bootstrap-token] Using token: y07j3b.us5br8tv4ku961pe
	I0907 01:14:20.301297   53935 out.go:204]   - Configuring RBAC rules ...
	I0907 01:14:20.301432   53935 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 01:14:20.301562   53935 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0907 01:14:20.301742   53935 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 01:14:20.301902   53935 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 01:14:20.302059   53935 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 01:14:20.302182   53935 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 01:14:20.302325   53935 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0907 01:14:20.302389   53935 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 01:14:20.302452   53935 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 01:14:20.302463   53935 kubeadm.go:322] 
	I0907 01:14:20.302548   53935 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 01:14:20.302559   53935 kubeadm.go:322] 
	I0907 01:14:20.302650   53935 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 01:14:20.302657   53935 kubeadm.go:322] 
	I0907 01:14:20.302688   53935 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 01:14:20.302760   53935 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 01:14:20.302846   53935 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 01:14:20.302856   53935 kubeadm.go:322] 
	I0907 01:14:20.302930   53935 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0907 01:14:20.302940   53935 kubeadm.go:322] 
	I0907 01:14:20.303043   53935 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0907 01:14:20.303072   53935 kubeadm.go:322] 
	I0907 01:14:20.303143   53935 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 01:14:20.303244   53935 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 01:14:20.303334   53935 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 01:14:20.303340   53935 kubeadm.go:322] 
	I0907 01:14:20.303463   53935 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0907 01:14:20.303592   53935 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 01:14:20.303608   53935 kubeadm.go:322] 
	I0907 01:14:20.303739   53935 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y07j3b.us5br8tv4ku961pe \
	I0907 01:14:20.303890   53935 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 01:14:20.303939   53935 kubeadm.go:322] 	--control-plane 
	I0907 01:14:20.303952   53935 kubeadm.go:322] 
	I0907 01:14:20.304068   53935 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 01:14:20.304078   53935 kubeadm.go:322] 
	I0907 01:14:20.304183   53935 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y07j3b.us5br8tv4ku961pe \
	I0907 01:14:20.304350   53935 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 01:14:20.304366   53935 cni.go:84] Creating CNI manager for "calico"
	I0907 01:14:20.306828   53935 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0907 01:14:20.310910   53935 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0907 01:14:20.311160   53935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (244810 bytes)
	I0907 01:14:20.346636   53935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0907 01:14:23.188332   53935 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.841662102s)
	I0907 01:14:23.188373   53935 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 01:14:23.188482   53935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=calico-965889 minikube.k8s.io/updated_at=2023_09_07T01_14_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:23.188523   53935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:23.332314   53935 ops.go:34] apiserver oom_adj: -16
	I0907 01:14:23.332347   53935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:23.438582   53935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:24.031385   53935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:24.530918   53935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:14:25.031552   53935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
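	For the "calico-965889" bootstrap above, minikube applies the generated Calico manifest with the bundled kubectl and then repeatedly runs "kubectl get sa default" until kubeadm has created the default service account, which is why the same command appears several times in quick succession. The following is only a hedged sketch of that pattern driven through os/exec rather than minikube's ssh_runner; the binary path, kubeconfig flag, and manifest location are copied from the log and are assumptions outside that context.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// run executes a command and folds its combined output into any error.
	func run(args ...string) error {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %w (%s)", args, err, out)
		}
		return nil
	}

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.28.1/kubectl"
		kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

		// Apply the generated CNI manifest, as in the log above.
		if err := run("sudo", kubectl, "apply", kubeconfig, "-f", "/var/tmp/minikube/cni.yaml"); err != nil {
			fmt.Println(err)
			return
		}

		// Poll until the "default" service account exists; it is created
		// asynchronously after kubeadm init, hence the repeated "get sa default" runs.
		for i := 0; i < 60; i++ {
			if err := run("sudo", kubectl, "get", "sa", "default", kubeconfig); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}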
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-07 00:51:24 UTC, ends at Thu 2023-09-07 01:14:29 UTC. --
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.400021527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2f2be634-2c8d-4526-9d52-b4af2fe84663 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.400283123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2f2be634-2c8d-4526-9d52-b4af2fe84663 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.444298981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a70008d2-d817-4c67-bf49-caa4e5fb54ab name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.444386221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a70008d2-d817-4c67-bf49-caa4e5fb54ab name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.444592640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a70008d2-d817-4c67-bf49-caa4e5fb54ab name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.451754420Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=ff606c82-fd23-419e-8be1-39a545062303 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.452104879Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&PodSandboxMetadata{Name:busybox,Uid:5fd80493-eaa4-4576-b185-e4544930616c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047927273562709,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:59.307202608Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-wdnpc,Uid:98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:169404
7926973383965,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:59.307210538Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7343017645a3b3f79206b5070b251a826e57b55aa3282563e8b652bacadd391b,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-2w2m6,Uid:70d0ed87-ab6c-4f43-b12d-4730244d67db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047924928896593,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-2w2m6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70d0ed87-ab6c-4f43-b12d-4730244d67db,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07
T00:51:59.307225811Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&PodSandboxMetadata{Name:kube-proxy-5bh7n,Uid:28b4df63-f3db-4544-ab5d-54a021be48bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047919716687702,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28b4df63-f3db-4544-ab5d-54a021be48bf,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-07T00:51:59.307222561Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:54e9c6d3-3c07-4afe-94cd-e57f83ba3152,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047919681346109,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2023-09-07T00:51:59.307214804Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-773466,Uid:4cac465f33f5c79f9d0221b16fad139b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047912839147827,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cac465f33f5c79f9d0221b16fad139b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.96:2379,kubernetes.io/config.hash: 4cac465f33f5c79f9d0221b16fad139b,kubernetes.io/config.seen: 2023-09-07T00:51:52.297830059Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&PodSandboxMetadata{Name:k
ube-apiserver-default-k8s-diff-port-773466,Uid:9c667ef6664b0c4031e2445ab302b1ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047912832606931,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c667ef6664b0c4031e2445ab302b1ac,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.96:8444,kubernetes.io/config.hash: 9c667ef6664b0c4031e2445ab302b1ac,kubernetes.io/config.seen: 2023-09-07T00:51:52.297831004Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-773466,Uid:2ff67be2492143e50f19261845f2b3bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047912810649496,Labels:map[str
ing]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2ff67be2492143e50f19261845f2b3bf,kubernetes.io/config.seen: 2023-09-07T00:51:52.297824881Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-773466,Uid:5dbc3cb98b05a56f58e47c0d93f0d7ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694047912797493209,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbc3cb98b05a56f58e47c0d93f0d7ac,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 5dbc3cb98b05a56f58e47c0d93f0d7ac,kubernetes.io/config.seen: 2023-09-07T00:51:52.297828966Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=ff606c82-fd23-419e-8be1-39a545062303 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.453401234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2e842899-a722-4105-901b-6fd239d28d87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.453453694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2e842899-a722-4105-901b-6fd239d28d87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.453647957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2e842899-a722-4105-901b-6fd239d28d87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.497182460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a53f50f4-dce1-4a35-ac3e-42756fa6de35 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.497272095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a53f50f4-dce1-4a35-ac3e-42756fa6de35 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.497616698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a53f50f4-dce1-4a35-ac3e-42756fa6de35 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.539501590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cdc72aab-f8d2-410c-8660-94138bb2c4c5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.539594620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cdc72aab-f8d2-410c-8660-94138bb2c4c5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.539819026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cdc72aab-f8d2-410c-8660-94138bb2c4c5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.582850171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=56ffe56e-2572-4af0-98d9-7620159eccd3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.583034104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=56ffe56e-2572-4af0-98d9-7620159eccd3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.583346947Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=56ffe56e-2572-4af0-98d9-7620159eccd3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.625723140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e243a5a6-198a-4b6f-97e7-dd7d6e6f7d65 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.625832775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e243a5a6-198a-4b6f-97e7-dd7d6e6f7d65 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.626236266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e243a5a6-198a-4b6f-97e7-dd7d6e6f7d65 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.661273704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1c7ba351-1149-44bb-9927-209dedc045c1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.661372625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1c7ba351-1149-44bb-9927-209dedc045c1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:14:29 default-k8s-diff-port-773466 crio[734]: time="2023-09-07 01:14:29.661687496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694047951558077229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a99d9f4d79e52008260a236d34d9cb2cc82eb24091ef82f7724be55a5e215410,PodSandboxId:bbdf1a69d21dc2a6f1193f405d17227a9f1bcb72fd9f809e1f4b7afd38f739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694047931048421433,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5fd80493-eaa4-4576-b185-e4544930616c,},Annotations:map[string]string{io.kubernetes.container.hash: 90942013,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08,PodSandboxId:47d994feeba1026457452095ffb790352896ad1a7bceedc4784b73a05e0836bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694047927650142367,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wdnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e46ef4-ee2b-4d80-9c3c-b1d675142c7f,},Annotations:map[string]string{io.kubernetes.container.hash: 706a20b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c,PodSandboxId:d0699c8de31063cfac160e42c6b30d8a464ae48939c2009c9306de8c938488b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694047920877321956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 54e9c6d3-3c07-4afe-94cd-e57f83ba3152,},Annotations:map[string]string{io.kubernetes.container.hash: 37a3e28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad,PodSandboxId:f2f0fa2c21a791f4377678cb4d0cb754dcc7df71ebef9aaf925724723f773b8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694047920669660638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bh7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
8b4df63-f3db-4544-ab5d-54a021be48bf,},Annotations:map[string]string{io.kubernetes.container.hash: 54680b38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02,PodSandboxId:2fcd735eea5351abe771a8ff24659b89e59225c60e5699231bbb67da37f1ee07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694047914023827433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5dbc3cb98b05a56f58e47c0d93f0d7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704,PodSandboxId:e2d5bd5f133d4abcd5a61a121cb3215fe00947a3e38cd7b0b96ad514e4637fdb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694047913714520767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-773466,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2ff67be2492143e50f19261845f2b3bf,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13,PodSandboxId:636d63364a128104dbd8219910fc440f0ff034a2e587480d9ef296ec6db88a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694047913484568877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
cac465f33f5c79f9d0221b16fad139b,},Annotations:map[string]string{io.kubernetes.container.hash: fbb85e4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0,PodSandboxId:eb837fe5c83c4292e0e4bd6aabb48fe2d2ec46cc147fda80d7afd83a9ced1131,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694047913383232760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-773466,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
c667ef6664b0c4031e2445ab302b1ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2321b166,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1c7ba351-1149-44bb-9927-209dedc045c1 name=/runtime.v1alpha2.RuntimeService/ListContainers
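	The repeated Request/Response pairs above come from the log collector polling CRI-O's ListContainers RPC (note the "No filters were applied" lines). For reference, a minimal Go sketch of the same call is shown below; it is not part of the test suite, and the socket path and the k8s.io/cri-api client are assumptions based on the cri-o runtime reported for this node:

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"log"
	    	"time"

	    	"google.golang.org/grpc"
	    	"google.golang.org/grpc/credentials/insecure"
	    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	    	// Assumed default CRI-O socket (matches the cri-socket annotation below).
	    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	    		grpc.WithTransportCredentials(insecure.NewCredentials()))
	    	if err != nil {
	    		log.Fatalf("dial CRI socket: %v", err)
	    	}
	    	defer conn.Close()

	    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	    	defer cancel()

	    	// An empty filter mirrors the "No filters were applied" requests in the log.
	    	resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx,
	    		&runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
	    	if err != nil {
	    		log.Fatalf("ListContainers: %v", err)
	    	}
	    	for _, c := range resp.Containers {
	    		// Truncated ID, container name, and state, roughly matching the
	    		// "container status" table that follows.
	    		fmt.Printf("%-13.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
	    	}
	    }

	Run inside the node (for example via minikube ssh), this would list the same nine containers that appear in the container status section below.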
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	a7c3d8a195ffd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   d0699c8de3106
	a99d9f4d79e52       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   bbdf1a69d21dc
	d28e9dadd44da       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      22 minutes ago      Running             coredns                   1                   47d994feeba10
	cdcb5afe48490       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   d0699c8de3106
	0672903c9cfb1       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      22 minutes ago      Running             kube-proxy                1                   f2f0fa2c21a79
	a0f6bff336882       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      22 minutes ago      Running             kube-scheduler            1                   2fcd735eea535
	0692c75701ac7       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      22 minutes ago      Running             kube-controller-manager   1                   e2d5bd5f133d4
	e985c2c9d202b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      22 minutes ago      Running             etcd                      1                   636d63364a128
	891a5075955e0       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      22 minutes ago      Running             kube-apiserver            1                   eb837fe5c83c4
	
	* 
	* ==> coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50593 - 24799 "HINFO IN 8877089458389055375.4368464280314516910. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011942331s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-773466
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-773466
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=default-k8s-diff-port-773466
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_07T00_45_29_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:45:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-773466
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Sep 2023 01:14:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 01:12:53 +0000   Thu, 07 Sep 2023 00:45:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 01:12:53 +0000   Thu, 07 Sep 2023 00:45:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 01:12:53 +0000   Thu, 07 Sep 2023 00:45:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 01:12:53 +0000   Thu, 07 Sep 2023 00:52:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    default-k8s-diff-port-773466
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5a5a6de89e84c62bfe1fc623205e445
	  System UUID:                e5a5a6de-89e8-4c62-bfe1-fc623205e445
	  Boot ID:                    0b04f7f7-709b-4666-97bc-70f056534b6c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-wdnpc                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-773466                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-773466              250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-773466     200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-5bh7n                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-773466              100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-2w2m6                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-773466 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-773466 event: Registered Node default-k8s-diff-port-773466 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-773466 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-773466 event: Registered Node default-k8s-diff-port-773466 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep 7 00:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.088210] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.557623] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.450528] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140245] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.711618] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.595629] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.139596] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.212928] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.118650] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[  +0.277506] systemd-fstab-generator[717]: Ignoring "noauto" for root device
	[ +18.097503] systemd-fstab-generator[936]: Ignoring "noauto" for root device
	[Sep 7 00:52] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] <==
	* {"level":"warn","ts":"2023-09-07T01:12:44.385432Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-07T01:12:43.78583Z","time spent":"599.517735ms","remote":"127.0.0.1:36584","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-773466\" mod_revision:1584 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-773466\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-773466\" > >"}
	{"level":"warn","ts":"2023-09-07T01:12:44.385559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.057207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-07T01:12:44.385784Z","caller":"traceutil/trace.go:171","msg":"trace[2114396316] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1592; }","duration":"103.283822ms","start":"2023-09-07T01:12:44.282486Z","end":"2023-09-07T01:12:44.38577Z","steps":["trace[2114396316] 'agreement among raft nodes before linearized reading'  (duration: 103.005464ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T01:12:44.385603Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.39864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-07T01:12:44.386005Z","caller":"traceutil/trace.go:171","msg":"trace[119934658] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1592; }","duration":"143.792282ms","start":"2023-09-07T01:12:44.242198Z","end":"2023-09-07T01:12:44.38599Z","steps":["trace[119934658] 'agreement among raft nodes before linearized reading'  (duration: 143.390799ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-07T01:12:45.078255Z","caller":"traceutil/trace.go:171","msg":"trace[2000901438] transaction","detail":"{read_only:false; response_revision:1594; number_of_response:1; }","duration":"153.989019ms","start":"2023-09-07T01:12:44.924247Z","end":"2023-09-07T01:12:45.078236Z","steps":["trace[2000901438] 'process raft request'  (duration: 153.883568ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-07T01:13:09.216987Z","caller":"traceutil/trace.go:171","msg":"trace[39300916] transaction","detail":"{read_only:false; response_revision:1613; number_of_response:1; }","duration":"361.455716ms","start":"2023-09-07T01:13:08.85542Z","end":"2023-09-07T01:13:09.216875Z","steps":["trace[39300916] 'process raft request'  (duration: 361.283034ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T01:13:09.21746Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-07T01:13:08.85536Z","time spent":"362.008933ms","remote":"127.0.0.1:36562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1612 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-09-07T01:13:09.441264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.898275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-07T01:13:09.441732Z","caller":"traceutil/trace.go:171","msg":"trace[2020819567] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1613; }","duration":"156.375229ms","start":"2023-09-07T01:13:09.285329Z","end":"2023-09-07T01:13:09.441704Z","steps":["trace[2020819567] 'range keys from in-memory index tree'  (duration: 155.813704ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-07T01:13:39.712725Z","caller":"traceutil/trace.go:171","msg":"trace[1000960515] transaction","detail":"{read_only:false; response_revision:1639; number_of_response:1; }","duration":"244.985467ms","start":"2023-09-07T01:13:39.467712Z","end":"2023-09-07T01:13:39.712697Z","steps":["trace[1000960515] 'process raft request'  (duration: 244.773857ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T01:13:40.464362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.86563ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9077157250254883047 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.96\" mod_revision:1631 > success:<request_put:<key:\"/registry/masterleases/192.168.39.96\" value_size:67 lease:9077157250254883045 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.96\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-07T01:13:40.46445Z","caller":"traceutil/trace.go:171","msg":"trace[226637399] linearizableReadLoop","detail":"{readStateIndex:1944; appliedIndex:1943; }","duration":"222.787857ms","start":"2023-09-07T01:13:40.241653Z","end":"2023-09-07T01:13:40.464441Z","steps":["trace[226637399] 'read index received'  (duration: 88.629752ms)","trace[226637399] 'applied index is now lower than readState.Index'  (duration: 134.156789ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-07T01:13:40.46451Z","caller":"traceutil/trace.go:171","msg":"trace[1019508132] transaction","detail":"{read_only:false; response_revision:1640; number_of_response:1; }","duration":"261.953697ms","start":"2023-09-07T01:13:40.202547Z","end":"2023-09-07T01:13:40.464501Z","steps":["trace[1019508132] 'process raft request'  (duration: 127.775142ms)","trace[1019508132] 'compare'  (duration: 133.423495ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-07T01:13:40.464751Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.108784ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-07T01:13:40.464783Z","caller":"traceutil/trace.go:171","msg":"trace[1747667985] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1640; }","duration":"223.142898ms","start":"2023-09-07T01:13:40.241628Z","end":"2023-09-07T01:13:40.464771Z","steps":["trace[1747667985] 'agreement among raft nodes before linearized reading'  (duration: 223.027013ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T01:13:40.465064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.898113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-07T01:13:40.465158Z","caller":"traceutil/trace.go:171","msg":"trace[1876500149] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1640; }","duration":"183.998923ms","start":"2023-09-07T01:13:40.281148Z","end":"2023-09-07T01:13:40.465147Z","steps":["trace[1876500149] 'agreement among raft nodes before linearized reading'  (duration: 183.718359ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-07T01:14:06.639437Z","caller":"traceutil/trace.go:171","msg":"trace[1204209252] transaction","detail":"{read_only:false; response_revision:1660; number_of_response:1; }","duration":"254.550279ms","start":"2023-09-07T01:14:06.384787Z","end":"2023-09-07T01:14:06.639337Z","steps":["trace[1204209252] 'process raft request'  (duration: 254.382974ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T01:14:06.884863Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.555515ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9077157250254883176 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-773466\" mod_revision:1652 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-773466\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-773466\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-07T01:14:06.886572Z","caller":"traceutil/trace.go:171","msg":"trace[1531197765] linearizableReadLoop","detail":"{readStateIndex:1970; appliedIndex:1969; }","duration":"103.806843ms","start":"2023-09-07T01:14:06.782748Z","end":"2023-09-07T01:14:06.886555Z","steps":["trace[1531197765] 'read index received'  (duration: 97.957µs)","trace[1531197765] 'applied index is now lower than readState.Index'  (duration: 103.707029ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-07T01:14:06.886645Z","caller":"traceutil/trace.go:171","msg":"trace[1030534730] transaction","detail":"{read_only:false; response_revision:1661; number_of_response:1; }","duration":"396.421421ms","start":"2023-09-07T01:14:06.4902Z","end":"2023-09-07T01:14:06.886622Z","steps":["trace[1030534730] 'process raft request'  (duration: 243.924812ms)","trace[1030534730] 'compare'  (duration: 150.176089ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-07T01:14:06.886675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.962441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-07T01:14:06.886784Z","caller":"traceutil/trace.go:171","msg":"trace[1122486195] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1661; }","duration":"104.080252ms","start":"2023-09-07T01:14:06.782696Z","end":"2023-09-07T01:14:06.886776Z","steps":["trace[1122486195] 'agreement among raft nodes before linearized reading'  (duration: 103.9399ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T01:14:06.886769Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-07T01:14:06.490181Z","time spent":"396.536263ms","remote":"127.0.0.1:36584","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-773466\" mod_revision:1652 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-773466\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-773466\" > >"}
	
	* 
	* ==> kernel <==
	*  01:14:30 up 23 min,  0 users,  load average: 0.19, 0.16, 0.13
	Linux default-k8s-diff-port-773466 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:11:58.594896       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.204.192:443: connect: connection refused
	I0907 01:11:58.595014       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:11:59.595595       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:11:59.595662       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:11:59.595669       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:11:59.595895       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:11:59.596218       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:11:59.597560       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:12:44.386650       1 trace.go:236] Trace[2093854601]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ce79aa56-30c1-439d-a996-7ac2ecbd6c86,client:192.168.39.96,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-diff-port-773466,user-agent:kubelet/v1.28.1 (linux/amd64) kubernetes/8dc49c4,verb:PUT (07-Sep-2023 01:12:43.784) (total time: 602ms):
	Trace[2093854601]: ["GuaranteedUpdate etcd3" audit-id:ce79aa56-30c1-439d-a996-7ac2ecbd6c86,key:/leases/kube-node-lease/default-k8s-diff-port-773466,type:*coordination.Lease,resource:leases.coordination.k8s.io 601ms (01:12:43.784)
	Trace[2093854601]:  ---"Txn call completed" 601ms (01:12:44.386)]
	Trace[2093854601]: [602.237987ms] [602.237987ms] END
	I0907 01:12:58.420153       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.204.192:443: connect: connection refused
	I0907 01:12:58.420218       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:12:59.595879       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:12:59.596036       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:12:59.596074       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:12:59.598381       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:12:59.598570       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:12:59.598611       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:13:58.420216       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.204.192:443: connect: connection refused
	I0907 01:13:58.420282       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] <==
	* I0907 01:08:42.202148       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:09:11.754283       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:09:12.210745       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:09:41.760452       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:09:42.220601       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:10:11.767074       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:10:12.229542       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:10:41.772697       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:10:42.237176       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:11:11.782342       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:11:12.248089       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:11:41.789768       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:11:42.263866       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:12:11.795456       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:12:12.273387       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:12:41.804538       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:12:42.282742       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:13:11.828427       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:13:12.299211       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0907 01:13:21.351781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="284.192µs"
	I0907 01:13:33.353402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="492.981µs"
	E0907 01:13:41.833946       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:13:42.312301       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:14:11.840766       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:14:12.323257       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] <==
	* I0907 00:52:01.001807       1 server_others.go:69] "Using iptables proxy"
	I0907 00:52:01.021002       1 node.go:141] Successfully retrieved node IP: 192.168.39.96
	I0907 00:52:01.119184       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0907 00:52:01.119232       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0907 00:52:01.122060       1 server_others.go:152] "Using iptables Proxier"
	I0907 00:52:01.122128       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0907 00:52:01.122276       1 server.go:846] "Version info" version="v1.28.1"
	I0907 00:52:01.122322       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:52:01.125198       1 config.go:315] "Starting node config controller"
	I0907 00:52:01.125249       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0907 00:52:01.132725       1 config.go:97] "Starting endpoint slice config controller"
	I0907 00:52:01.134105       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0907 00:52:01.133896       1 config.go:188] "Starting service config controller"
	I0907 00:52:01.134334       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0907 00:52:01.226255       1 shared_informer.go:318] Caches are synced for node config
	I0907 00:52:01.234859       1 shared_informer.go:318] Caches are synced for service config
	I0907 00:52:01.235138       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] <==
	* I0907 00:51:55.936413       1 serving.go:348] Generated self-signed cert in-memory
	W0907 00:51:58.483035       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0907 00:51:58.483192       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0907 00:51:58.483225       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0907 00:51:58.483341       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0907 00:51:58.574183       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0907 00:51:58.574292       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:51:58.581406       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0907 00:51:58.581525       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:51:58.588357       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0907 00:51:58.581546       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0907 00:51:58.690010       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-07 00:51:24 UTC, ends at Thu 2023-09-07 01:14:30 UTC. --
	Sep 07 01:11:52 default-k8s-diff-port-773466 kubelet[942]: E0907 01:11:52.382736     942 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Sep 07 01:11:53 default-k8s-diff-port-773466 kubelet[942]: E0907 01:11:53.335293     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:12:06 default-k8s-diff-port-773466 kubelet[942]: E0907 01:12:06.334426     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:12:19 default-k8s-diff-port-773466 kubelet[942]: E0907 01:12:19.334342     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:12:33 default-k8s-diff-port-773466 kubelet[942]: E0907 01:12:33.334861     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:12:44 default-k8s-diff-port-773466 kubelet[942]: E0907 01:12:44.335041     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:12:52 default-k8s-diff-port-773466 kubelet[942]: E0907 01:12:52.350234     942 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:12:52 default-k8s-diff-port-773466 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:12:52 default-k8s-diff-port-773466 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:12:52 default-k8s-diff-port-773466 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:12:55 default-k8s-diff-port-773466 kubelet[942]: E0907 01:12:55.334439     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:13:06 default-k8s-diff-port-773466 kubelet[942]: E0907 01:13:06.349362     942 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 07 01:13:06 default-k8s-diff-port-773466 kubelet[942]: E0907 01:13:06.349511     942 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 07 01:13:06 default-k8s-diff-port-773466 kubelet[942]: E0907 01:13:06.349863     942 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rlt25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-2w2m6_kube-system(70d0ed87-ab6c-4f43-b12d-4730244d67db): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 07 01:13:06 default-k8s-diff-port-773466 kubelet[942]: E0907 01:13:06.350072     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:13:21 default-k8s-diff-port-773466 kubelet[942]: E0907 01:13:21.334464     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:13:33 default-k8s-diff-port-773466 kubelet[942]: E0907 01:13:33.334481     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:13:48 default-k8s-diff-port-773466 kubelet[942]: E0907 01:13:48.335200     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:13:52 default-k8s-diff-port-773466 kubelet[942]: E0907 01:13:52.350264     942 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:13:52 default-k8s-diff-port-773466 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:13:52 default-k8s-diff-port-773466 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:13:52 default-k8s-diff-port-773466 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:14:03 default-k8s-diff-port-773466 kubelet[942]: E0907 01:14:03.335646     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:14:14 default-k8s-diff-port-773466 kubelet[942]: E0907 01:14:14.334803     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	Sep 07 01:14:26 default-k8s-diff-port-773466 kubelet[942]: E0907 01:14:26.335383     942 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-2w2m6" podUID="70d0ed87-ab6c-4f43-b12d-4730244d67db"
	
	* 
	* ==> storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] <==
	* I0907 00:52:31.678683       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0907 00:52:31.694211       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0907 00:52:31.694308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0907 00:52:49.097407       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0907 00:52:49.097853       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-773466_4c75def1-2a84-4d43-adcc-210737c5e2f7!
	I0907 00:52:49.098588       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9c535dee-bad9-476a-b4c2-f4ef696ff918", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-773466_4c75def1-2a84-4d43-adcc-210737c5e2f7 became leader
	I0907 00:52:49.198212       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-773466_4c75def1-2a84-4d43-adcc-210737c5e2f7!
	
	* 
	* ==> storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] <==
	* I0907 00:52:01.071548       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0907 00:52:31.075651       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
E0907 01:14:30.741180   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-773466 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-2w2m6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-773466 describe pod metrics-server-57f55c9bc5-2w2m6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-773466 describe pod metrics-server-57f55c9bc5-2w2m6: exit status 1 (71.485044ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-2w2m6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-773466 describe pod metrics-server-57f55c9bc5-2w2m6: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.73s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (396.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0907 01:06:17.593220   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 01:06:24.847105   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-321164 -n no-preload-321164
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-07 01:12:24.83718238 +0000 UTC m=+5683.558637933
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-321164 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-321164 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.879µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-321164 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
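The image check that failed above can also be reproduced by hand against the profile's kubeconfig context; the command below is only a sketch (it assumes the no-preload-321164 cluster is still reachable and that the dashboard addon actually created the dashboard-metrics-scraper deployment named in the describe call above):
	# print the container image(s) used by the dashboard-metrics-scraper deployment
	kubectl --context no-preload-321164 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
On a healthy run this would print an image string containing registry.k8s.io/echoserver:1.4, which is the expectation asserted at start_stop_delete_test.go:297.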
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-321164 -n no-preload-321164
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-321164 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-321164 logs -n 25: (1.224526756s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-321164             | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-546209            | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-488051 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | disable-driver-mounts-488051                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:45 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-940806             | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-773466  | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-321164                  | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-546209                 | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-773466       | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC | 07 Sep 23 00:56 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 01:10 UTC | 07 Sep 23 01:11 UTC |
	| start   | -p newest-cni-294457 --memory=2200 --alsologtostderr   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:11 UTC | 07 Sep 23 01:12 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-294457             | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-294457                                   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-294457                  | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC | 07 Sep 23 01:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-294457 --memory=2200 --alsologtostderr   | newest-cni-294457            | jenkins | v1.31.2 | 07 Sep 23 01:12 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/07 01:12:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 01:12:16.080470   52488 out.go:296] Setting OutFile to fd 1 ...
	I0907 01:12:16.080585   52488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 01:12:16.080593   52488 out.go:309] Setting ErrFile to fd 2...
	I0907 01:12:16.080597   52488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 01:12:16.080787   52488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 01:12:16.081368   52488 out.go:303] Setting JSON to false
	I0907 01:12:16.082393   52488 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6880,"bootTime":1694042256,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 01:12:16.082460   52488 start.go:138] virtualization: kvm guest
	I0907 01:12:16.084672   52488 out.go:177] * [newest-cni-294457] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 01:12:16.086277   52488 notify.go:220] Checking for updates...
	I0907 01:12:16.086284   52488 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 01:12:16.087905   52488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 01:12:16.089405   52488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 01:12:16.090930   52488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 01:12:16.092878   52488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 01:12:16.094377   52488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 01:12:16.096188   52488 config.go:182] Loaded profile config "newest-cni-294457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 01:12:16.096598   52488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:12:16.096653   52488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:12:16.111971   52488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0907 01:12:16.112395   52488 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:12:16.112978   52488 main.go:141] libmachine: Using API Version  1
	I0907 01:12:16.113009   52488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:12:16.113325   52488 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:12:16.113590   52488 main.go:141] libmachine: (newest-cni-294457) Calling .DriverName
	I0907 01:12:16.113893   52488 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 01:12:16.114255   52488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:12:16.114290   52488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:12:16.129539   52488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40507
	I0907 01:12:16.130031   52488 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:12:16.130645   52488 main.go:141] libmachine: Using API Version  1
	I0907 01:12:16.130673   52488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:12:16.131004   52488 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:12:16.131211   52488 main.go:141] libmachine: (newest-cni-294457) Calling .DriverName
	I0907 01:12:16.166853   52488 out.go:177] * Using the kvm2 driver based on existing profile
	I0907 01:12:16.168142   52488 start.go:298] selected driver: kvm2
	I0907 01:12:16.168154   52488 start.go:902] validating driver "kvm2" against &{Name:newest-cni-294457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-294457 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:fal
se system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 01:12:16.168289   52488 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 01:12:16.169001   52488 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 01:12:16.169077   52488 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 01:12:16.183573   52488 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 01:12:16.183978   52488 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0907 01:12:16.184020   52488 cni.go:84] Creating CNI manager for ""
	I0907 01:12:16.184030   52488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 01:12:16.184045   52488 start_flags.go:321] config:
	{Name:newest-cni-294457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-294457 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[]
ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 01:12:16.184238   52488 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 01:12:16.186173   52488 out.go:177] * Starting control plane node newest-cni-294457 in cluster newest-cni-294457
	I0907 01:12:16.187597   52488 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 01:12:16.187639   52488 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0907 01:12:16.187656   52488 cache.go:57] Caching tarball of preloaded images
	I0907 01:12:16.187731   52488 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 01:12:16.187743   52488 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 01:12:16.187872   52488 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/newest-cni-294457/config.json ...
	I0907 01:12:16.188141   52488 start.go:365] acquiring machines lock for newest-cni-294457: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 01:12:16.188199   52488 start.go:369] acquired machines lock for "newest-cni-294457" in 31.076µs
	I0907 01:12:16.188216   52488 start.go:96] Skipping create...Using existing machine configuration
	I0907 01:12:16.188230   52488 fix.go:54] fixHost starting: 
	I0907 01:12:16.188508   52488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 01:12:16.188547   52488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 01:12:16.204552   52488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41583
	I0907 01:12:16.205080   52488 main.go:141] libmachine: () Calling .GetVersion
	I0907 01:12:16.205675   52488 main.go:141] libmachine: Using API Version  1
	I0907 01:12:16.205704   52488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 01:12:16.206019   52488 main.go:141] libmachine: () Calling .GetMachineName
	I0907 01:12:16.206189   52488 main.go:141] libmachine: (newest-cni-294457) Calling .DriverName
	I0907 01:12:16.206365   52488 main.go:141] libmachine: (newest-cni-294457) Calling .GetState
	I0907 01:12:16.208044   52488 fix.go:102] recreateIfNeeded on newest-cni-294457: state=Stopped err=<nil>
	I0907 01:12:16.208082   52488 main.go:141] libmachine: (newest-cni-294457) Calling .DriverName
	W0907 01:12:16.208249   52488 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 01:12:16.210492   52488 out.go:177] * Restarting existing kvm2 VM for "newest-cni-294457" ...
	I0907 01:12:16.211983   52488 main.go:141] libmachine: (newest-cni-294457) Calling .Start
	I0907 01:12:16.212169   52488 main.go:141] libmachine: (newest-cni-294457) Ensuring networks are active...
	I0907 01:12:16.213077   52488 main.go:141] libmachine: (newest-cni-294457) Ensuring network default is active
	I0907 01:12:16.213438   52488 main.go:141] libmachine: (newest-cni-294457) Ensuring network mk-newest-cni-294457 is active
	I0907 01:12:16.213862   52488 main.go:141] libmachine: (newest-cni-294457) Getting domain xml...
	I0907 01:12:16.214598   52488 main.go:141] libmachine: (newest-cni-294457) Creating domain...
	I0907 01:12:17.529027   52488 main.go:141] libmachine: (newest-cni-294457) Waiting to get IP...
	I0907 01:12:17.529939   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:17.530526   52488 main.go:141] libmachine: (newest-cni-294457) DBG | unable to find current IP address of domain newest-cni-294457 in network mk-newest-cni-294457
	I0907 01:12:17.530602   52488 main.go:141] libmachine: (newest-cni-294457) DBG | I0907 01:12:17.530496   52523 retry.go:31] will retry after 198.112871ms: waiting for machine to come up
	I0907 01:12:17.729974   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:17.730476   52488 main.go:141] libmachine: (newest-cni-294457) DBG | unable to find current IP address of domain newest-cni-294457 in network mk-newest-cni-294457
	I0907 01:12:17.730532   52488 main.go:141] libmachine: (newest-cni-294457) DBG | I0907 01:12:17.730449   52523 retry.go:31] will retry after 278.787467ms: waiting for machine to come up
	I0907 01:12:18.010853   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:18.011246   52488 main.go:141] libmachine: (newest-cni-294457) DBG | unable to find current IP address of domain newest-cni-294457 in network mk-newest-cni-294457
	I0907 01:12:18.011276   52488 main.go:141] libmachine: (newest-cni-294457) DBG | I0907 01:12:18.011198   52523 retry.go:31] will retry after 428.161668ms: waiting for machine to come up
	I0907 01:12:18.440749   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:18.441300   52488 main.go:141] libmachine: (newest-cni-294457) DBG | unable to find current IP address of domain newest-cni-294457 in network mk-newest-cni-294457
	I0907 01:12:18.441331   52488 main.go:141] libmachine: (newest-cni-294457) DBG | I0907 01:12:18.441235   52523 retry.go:31] will retry after 548.343029ms: waiting for machine to come up
	I0907 01:12:18.990710   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:18.991176   52488 main.go:141] libmachine: (newest-cni-294457) DBG | unable to find current IP address of domain newest-cni-294457 in network mk-newest-cni-294457
	I0907 01:12:18.991208   52488 main.go:141] libmachine: (newest-cni-294457) DBG | I0907 01:12:18.991126   52523 retry.go:31] will retry after 658.109009ms: waiting for machine to come up
	I0907 01:12:19.650700   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:19.651212   52488 main.go:141] libmachine: (newest-cni-294457) DBG | unable to find current IP address of domain newest-cni-294457 in network mk-newest-cni-294457
	I0907 01:12:19.651243   52488 main.go:141] libmachine: (newest-cni-294457) DBG | I0907 01:12:19.651152   52523 retry.go:31] will retry after 901.378329ms: waiting for machine to come up
	I0907 01:12:20.554224   52488 main.go:141] libmachine: (newest-cni-294457) DBG | domain newest-cni-294457 has defined MAC address 52:54:00:eb:20:af in network mk-newest-cni-294457
	I0907 01:12:20.554757   52488 main.go:141] libmachine: (newest-cni-294457) DBG | unable to find current IP address of domain newest-cni-294457 in network mk-newest-cni-294457
	I0907 01:12:20.554793   52488 main.go:141] libmachine: (newest-cni-294457) DBG | I0907 01:12:20.554702   52523 retry.go:31] will retry after 805.242137ms: waiting for machine to come up
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-07 00:50:42 UTC, ends at Thu 2023-09-07 01:12:25 UTC. --
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.042247149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a3e27ee7-d80f-4bb0-b684-045d7049451c name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.399698953Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=27d46e2e-4a0b-4acd-9c9e-cfa1062239d0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.399819993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=27d46e2e-4a0b-4acd-9c9e-cfa1062239d0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.400072536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=27d46e2e-4a0b-4acd-9c9e-cfa1062239d0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.445401172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=876afb66-cb91-4cbd-bc99-000ce80a1d48 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.445528462Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=876afb66-cb91-4cbd-bc99-000ce80a1d48 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.445854397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=876afb66-cb91-4cbd-bc99-000ce80a1d48 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.489822579Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=59211fa6-ee0e-4cef-980f-c81f95cac4e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.489907110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=59211fa6-ee0e-4cef-980f-c81f95cac4e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.490091422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=59211fa6-ee0e-4cef-980f-c81f95cac4e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.530519393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=abd1b24f-565e-4263-b2bc-620116d10d91 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.530710217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=abd1b24f-565e-4263-b2bc-620116d10d91 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.531180048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=abd1b24f-565e-4263-b2bc-620116d10d91 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.568945190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f81c8960-40a6-4706-9896-be2896d30f80 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.569035842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f81c8960-40a6-4706-9896-be2896d30f80 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.569218144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f81c8960-40a6-4706-9896-be2896d30f80 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.605697904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=74eeb74b-a489-4c72-aac2-10827d5dd0e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.605780571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=74eeb74b-a489-4c72-aac2-10827d5dd0e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.605991670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=74eeb74b-a489-4c72-aac2-10827d5dd0e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.644337384Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=628064b7-9ee9-4e3c-9c1c-90841583cd24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.644454518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=628064b7-9ee9-4e3c-9c1c-90841583cd24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.644892020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=628064b7-9ee9-4e3c-9c1c-90841583cd24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.676760186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f22a6d6f-7297-424d-a136-224a24a93433 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.676821180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f22a6d6f-7297-424d-a136-224a24a93433 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:12:25 no-preload-321164 crio[712]: time="2023-09-07 01:12:25.676975682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb,PodSandboxId:fb40ca822771b3230937d55b30241286284089d069e64283770173d71e315ee3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694048204715017798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58bbe692-61d0-466d-b6bf-28af2faf4ec9,},Annotations:map[string]string{io.kubernetes.container.hash: deb117c1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851,PodSandboxId:1793a0b969a05031d95c008807583f3d7f416d2b5ed233c15219c91266309520,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694048204100909433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8tnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d896961-1b2c-48fd-b9dd-a40a95174fed,},Annotations:map[string]string{io.kubernetes.container.hash: b1a2e0a9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a,PodSandboxId:d7a01515c0f425c77db141ef09ceeb7aa237a8d3953687967f7cf4f65e9ae185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694048202158171513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-st6n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8f3aa3f2-223b-43de-b0e9-987958c50108,},Annotations:map[string]string{io.kubernetes.container.hash: bf4a60f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9,PodSandboxId:20b9f6105004e1d3a6844781996269673e28575bc1a369c0058c4817d4f90fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694048180412219869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb8f8325d2d60035b36fc55a8010f85,},Anno
tations:map[string]string{io.kubernetes.container.hash: 295e44f3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8,PodSandboxId:31c952c77569963b2754e3eef86aa7461e20350ad8540c05b5fc2033821ea21a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694048180299624728,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1378cd55d1f5e229ba062a16000fcd7,},Annotations:map
[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d,PodSandboxId:cb5064eb26ab588981f3697df795a3fa9b87797ad85441a25fe45c627741776a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694048180148331182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9358de5f83650971dcebe7225
9ed1da6,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a,PodSandboxId:a886ce3866e94997c76c80a35dd24eb1ba0ddb12fb3214614c09bc8ac162717a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694048179858775886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-321164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04a455265075b7d6a9513e1de08f615,},A
nnotations:map[string]string{io.kubernetes.container.hash: c6032eca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f22a6d6f-7297-424d-a136-224a24a93433 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	f22e91d0ce8dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   fb40ca822771b
	8e0183b73848b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   1793a0b969a05
	51811962596db       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   15 minutes ago      Running             kube-proxy                0                   d7a01515c0f42
	0ea3466fd42e9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   20b9f6105004e
	4404f1dd3fac9       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   16 minutes ago      Running             kube-scheduler            2                   31c952c775699
	731ac2001421f       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   16 minutes ago      Running             kube-controller-manager   2                   cb5064eb26ab5
	785b52b71f61b       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   16 minutes ago      Running             kube-apiserver            2                   a886ce3866e94
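For reference, the container listing above (and the repeated ListContainers responses in the CRI-O debug log earlier in this output) can normally be reproduced on the node with crictl. A minimal sketch, using this run's profile name and assuming crictl picks up the CRI-O socket advertised in the node annotations:

  out/minikube-linux-amd64 -p no-preload-321164 ssh "sudo crictl ps -a"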
	
	* 
	* ==> coredns [8e0183b73848b80310ea93bb1abcb697af8c5a5ace8510619fb5ffb3150d3851] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:44258 - 22988 "HINFO IN 890977412813668942.3486118729297504883. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009767887s
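The configuration whose SHA512 hashes appear in the reload messages above is the Corefile stored in the coredns ConfigMap; CoreDNS's reload plugin re-reads it whenever that ConfigMap changes. A minimal sketch for inspecting it, using this run's kubeconfig context:

  kubectl --context no-preload-321164 -n kube-system get configmap coredns -o yaml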
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-321164
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-321164
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=no-preload-321164
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_07T00_56_28_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:56:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-321164
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Sep 2023 01:12:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 01:12:06 +0000   Thu, 07 Sep 2023 00:56:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 01:12:06 +0000   Thu, 07 Sep 2023 00:56:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 01:12:06 +0000   Thu, 07 Sep 2023 00:56:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 01:12:06 +0000   Thu, 07 Sep 2023 00:56:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.125
	  Hostname:    no-preload-321164
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 509d986e8a774ffdb920ce8b89b0ab68
	  System UUID:                509d986e-8a77-4ffd-b920-ce8b89b0ab68
	  Boot ID:                    a61452df-4bbd-4620-855f-33e6e4674737
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-8tnp7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-321164                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-321164             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-321164    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-st6n8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-321164             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-vgngs              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
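The percentages in the two tables above are computed against the node's allocatable resources listed earlier: for example, 850m of CPU requests out of 2000m allocatable is roughly 42%, and 370Mi (about 378880Ki) of memory requests out of 2165900Ki allocatable is roughly 17%.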
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node no-preload-321164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node no-preload-321164 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node no-preload-321164 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node no-preload-321164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node no-preload-321164 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node no-preload-321164 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node no-preload-321164 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeReady                15m                kubelet          Node no-preload-321164 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node no-preload-321164 event: Registered Node no-preload-321164 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep 7 00:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071223] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.281912] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.350391] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139238] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.395196] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.092536] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.108494] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.142096] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.118733] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.227690] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Sep 7 00:51] systemd-fstab-generator[1219]: Ignoring "noauto" for root device
	[ +10.942270] hrtimer: interrupt took 3413367 ns
	[  +8.689484] kauditd_printk_skb: 29 callbacks suppressed
	[Sep 7 00:56] systemd-fstab-generator[3841]: Ignoring "noauto" for root device
	[  +9.763798] systemd-fstab-generator[4174]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [0ea3466fd42e9bdedcefa23032add2a63a58170f64a4aa7336223ace0d0df8a9] <==
	* {"level":"info","ts":"2023-09-07T00:56:23.208803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f95bb4e8498c60d4 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-07T00:56:23.208888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f95bb4e8498c60d4 received MsgPreVoteResp from f95bb4e8498c60d4 at term 1"}
	{"level":"info","ts":"2023-09-07T00:56:23.208949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f95bb4e8498c60d4 became candidate at term 2"}
	{"level":"info","ts":"2023-09-07T00:56:23.208994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f95bb4e8498c60d4 received MsgVoteResp from f95bb4e8498c60d4 at term 2"}
	{"level":"info","ts":"2023-09-07T00:56:23.209026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f95bb4e8498c60d4 became leader at term 2"}
	{"level":"info","ts":"2023-09-07T00:56:23.209052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f95bb4e8498c60d4 elected leader f95bb4e8498c60d4 at term 2"}
	{"level":"info","ts":"2023-09-07T00:56:23.211926Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f95bb4e8498c60d4","local-member-attributes":"{Name:no-preload-321164 ClientURLs:[https://192.168.61.125:2379]}","request-path":"/0/members/f95bb4e8498c60d4/attributes","cluster-id":"1fb33b3a6db0430d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-07T00:56:23.212169Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:56:23.212731Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-07T00:56:23.214048Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.125:2379"}
	{"level":"info","ts":"2023-09-07T00:56:23.214254Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:56:23.214353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-07T00:56:23.215918Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1fb33b3a6db0430d","local-member-id":"f95bb4e8498c60d4","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:56:23.216048Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:56:23.216074Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-07T00:56:23.216376Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-07T00:56:23.216392Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-07T01:06:23.26024Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":712}
	{"level":"info","ts":"2023-09-07T01:06:23.264344Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":712,"took":"2.631303ms","hash":1156960868}
	{"level":"info","ts":"2023-09-07T01:06:23.264526Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1156960868,"revision":712,"compact-revision":-1}
	{"level":"info","ts":"2023-09-07T01:11:23.268109Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2023-09-07T01:11:23.26925Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":955,"took":"640.316µs","hash":1866969177}
	{"level":"info","ts":"2023-09-07T01:11:23.269378Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1866969177,"revision":955,"compact-revision":712}
	{"level":"info","ts":"2023-09-07T01:11:34.730145Z","caller":"traceutil/trace.go:171","msg":"trace[1647268178] transaction","detail":"{read_only:false; response_revision:1207; number_of_response:1; }","duration":"496.157688ms","start":"2023-09-07T01:11:34.233962Z","end":"2023-09-07T01:11:34.73012Z","steps":["trace[1647268178] 'process raft request'  (duration: 495.888241ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-07T01:11:34.73181Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-07T01:11:34.233942Z","time spent":"496.318632ms","remote":"127.0.0.1:37786","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1206 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	* 
	* ==> kernel <==
	*  01:12:26 up 21 min,  0 users,  load average: 0.12, 0.17, 0.21
	Linux no-preload-321164 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [785b52b71f61b6055d1517d7017f7e79cf1aba484ac30e131b7ce2b86235663a] <==
	* I0907 01:11:24.788673       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.85.94:443: connect: connection refused
	I0907 01:11:24.788707       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:11:24.942820       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:11:24.943024       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:11:24.943664       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.85.94:443: connect: connection refused
	I0907 01:11:24.943730       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:11:25.943379       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:11:25.943436       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:11:25.943444       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:11:25.943649       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:11:25.943901       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:11:25.945294       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:11:34.732735       1 trace.go:236] Trace[1798476713]: "Update" accept:application/json, */*,audit-id:cd8a0dda-31da-4ddd-bf43-91a3fbec7e9b,client:192.168.61.125,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (07-Sep-2023 01:11:34.231) (total time: 500ms):
	Trace[1798476713]: [500.739283ms] [500.739283ms] END
	I0907 01:12:24.789038       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.85.94:443: connect: connection refused
	I0907 01:12:24.789155       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0907 01:12:25.943630       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:12:25.943658       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0907 01:12:25.943677       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0907 01:12:25.948246       1 handler_proxy.go:93] no RequestInfo found in the context
	E0907 01:12:25.948380       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:12:25.948391       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [731ac2001421f2caf3a0395a03f29c69ab105e0d0321ae9edfc6af19e6eaac9d] <==
	* I0907 01:06:40.696771       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:07:10.176778       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:07:10.707176       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:07:40.184149       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:07:40.717725       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0907 01:07:43.684140       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="177.299µs"
	I0907 01:07:57.690726       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="1.555902ms"
	E0907 01:08:10.190225       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:08:10.726231       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:08:40.196659       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:08:40.735037       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:09:10.202270       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:09:10.744374       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:09:40.210392       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:09:40.753149       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:10:10.216967       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:10:10.762475       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:10:40.222309       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:10:40.771895       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:11:10.227456       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:11:10.782986       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:11:40.235227       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:11:40.793672       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0907 01:12:10.240954       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0907 01:12:10.804098       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [51811962596db2c5626f20321d48f4000171f3324cf4acb028b4a1c5c613c33a] <==
	* I0907 00:56:43.717940       1 server_others.go:69] "Using iptables proxy"
	I0907 00:56:43.732393       1 node.go:141] Successfully retrieved node IP: 192.168.61.125
	I0907 00:56:43.884659       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0907 00:56:43.884712       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0907 00:56:43.887218       1 server_others.go:152] "Using iptables Proxier"
	I0907 00:56:43.887284       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0907 00:56:43.887488       1 server.go:846] "Version info" version="v1.28.1"
	I0907 00:56:43.887498       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:56:43.889388       1 config.go:188] "Starting service config controller"
	I0907 00:56:43.889419       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0907 00:56:43.889445       1 config.go:97] "Starting endpoint slice config controller"
	I0907 00:56:43.889449       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0907 00:56:43.896784       1 config.go:315] "Starting node config controller"
	I0907 00:56:43.896804       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0907 00:56:43.990637       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0907 00:56:43.990753       1 shared_informer.go:318] Caches are synced for service config
	I0907 00:56:43.997207       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4404f1dd3fac9240a94a5ba8a5e1a7684668f834217415cda8ea5a36d77381d8] <==
	* W0907 00:56:24.993880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0907 00:56:24.994017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0907 00:56:24.998801       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0907 00:56:24.998848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0907 00:56:25.871000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0907 00:56:25.871054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0907 00:56:25.958438       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0907 00:56:25.958523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0907 00:56:26.135117       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0907 00:56:26.135254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0907 00:56:26.190223       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0907 00:56:26.190324       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0907 00:56:26.226201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0907 00:56:26.226309       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0907 00:56:26.272869       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0907 00:56:26.272962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0907 00:56:26.276524       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0907 00:56:26.276645       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0907 00:56:26.288331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0907 00:56:26.288384       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0907 00:56:26.299868       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0907 00:56:26.299892       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0907 00:56:26.513038       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0907 00:56:26.513157       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0907 00:56:28.469948       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-07 00:50:42 UTC, ends at Thu 2023-09-07 01:12:26 UTC. --
	Sep 07 01:09:28 no-preload-321164 kubelet[4181]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:09:28 no-preload-321164 kubelet[4181]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:09:35 no-preload-321164 kubelet[4181]: E0907 01:09:35.663249    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:09:47 no-preload-321164 kubelet[4181]: E0907 01:09:47.662875    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:09:58 no-preload-321164 kubelet[4181]: E0907 01:09:58.663380    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:10:10 no-preload-321164 kubelet[4181]: E0907 01:10:10.662198    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:10:23 no-preload-321164 kubelet[4181]: E0907 01:10:23.662396    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:10:28 no-preload-321164 kubelet[4181]: E0907 01:10:28.702640    4181 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:10:28 no-preload-321164 kubelet[4181]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:10:28 no-preload-321164 kubelet[4181]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:10:28 no-preload-321164 kubelet[4181]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:10:38 no-preload-321164 kubelet[4181]: E0907 01:10:38.663345    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:10:50 no-preload-321164 kubelet[4181]: E0907 01:10:50.666496    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:11:01 no-preload-321164 kubelet[4181]: E0907 01:11:01.663395    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:11:13 no-preload-321164 kubelet[4181]: E0907 01:11:13.663087    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:11:27 no-preload-321164 kubelet[4181]: E0907 01:11:27.663494    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:11:28 no-preload-321164 kubelet[4181]: E0907 01:11:28.703342    4181 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 07 01:11:28 no-preload-321164 kubelet[4181]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 07 01:11:28 no-preload-321164 kubelet[4181]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 07 01:11:28 no-preload-321164 kubelet[4181]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 07 01:11:28 no-preload-321164 kubelet[4181]: E0907 01:11:28.706742    4181 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Sep 07 01:11:41 no-preload-321164 kubelet[4181]: E0907 01:11:41.662878    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:11:54 no-preload-321164 kubelet[4181]: E0907 01:11:54.664028    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:12:08 no-preload-321164 kubelet[4181]: E0907 01:12:08.663736    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	Sep 07 01:12:21 no-preload-321164 kubelet[4181]: E0907 01:12:21.663432    4181 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgngs" podUID="9036423c-c4f7-4beb-92da-e106b8af306c"
	
	* 
	* ==> storage-provisioner [f22e91d0ce8ddfc2ae870e321a47a122680b5d8f0e2cc8a2305481e1085489fb] <==
	* I0907 00:56:44.931286       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0907 00:56:44.948081       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0907 00:56:44.948194       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0907 00:56:44.956926       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0907 00:56:44.957110       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-321164_8734501e-e66a-4197-984a-84457f7fa177!
	I0907 00:56:44.961079       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b8683b3a-8d26-42a1-bae4-5d58eae1aa63", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-321164_8734501e-e66a-4197-984a-84457f7fa177 became leader
	I0907 00:56:45.057961       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-321164_8734501e-e66a-4197-984a-84457f7fa177!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-321164 -n no-preload-321164
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-321164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vgngs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-321164 describe pod metrics-server-57f55c9bc5-vgngs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-321164 describe pod metrics-server-57f55c9bc5-vgngs: exit status 1 (81.616869ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vgngs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-321164 describe pod metrics-server-57f55c9bc5-vgngs: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (396.95s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (164.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0907 01:09:02.118079   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-940806 -n old-k8s-version-940806
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-07 01:10:56.045019468 +0000 UTC m=+5594.766475012
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-940806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-940806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.72µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-940806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-940806 -n old-k8s-version-940806
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-940806 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-940806 logs -n 25: (1.543951311s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-049830                           | kubernetes-upgrade-049830    | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-386196                              | cert-expiration-386196       | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-049830                           | kubernetes-upgrade-049830    | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:42 UTC |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:42 UTC | 07 Sep 23 00:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-940806        | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:43 UTC | 07 Sep 23 00:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-321164             | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-546209            | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-690155                              | stopped-upgrade-690155       | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	| delete  | -p                                                     | disable-driver-mounts-488051 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:44 UTC |
	|         | disable-driver-mounts-488051                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:44 UTC | 07 Sep 23 00:45 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-940806             | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-940806                              | old-k8s-version-940806       | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-773466  | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC | 07 Sep 23 00:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:45 UTC |                     |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-321164                  | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-546209                 | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-321164                                   | no-preload-321164            | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-546209                                  | embed-certs-546209           | jenkins | v1.31.2 | 07 Sep 23 00:46 UTC | 07 Sep 23 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-773466       | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-773466 | jenkins | v1.31.2 | 07 Sep 23 00:48 UTC | 07 Sep 23 00:56 UTC |
	|         | default-k8s-diff-port-773466                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/07 00:48:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:48:30.668905   47297 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:48:30.669040   47297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:48:30.669051   47297 out.go:309] Setting ErrFile to fd 2...
	I0907 00:48:30.669055   47297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:48:30.669275   47297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:48:30.669849   47297 out.go:303] Setting JSON to false
	I0907 00:48:30.670802   47297 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5455,"bootTime":1694042256,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:48:30.670876   47297 start.go:138] virtualization: kvm guest
	I0907 00:48:30.673226   47297 out.go:177] * [default-k8s-diff-port-773466] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:48:30.675018   47297 notify.go:220] Checking for updates...
	I0907 00:48:30.675022   47297 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:48:30.676573   47297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:48:30.677899   47297 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:48:30.679390   47297 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:48:30.680678   47297 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:48:30.682324   47297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:48:30.684199   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:48:30.684737   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:48:30.684791   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:48:30.699093   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37855
	I0907 00:48:30.699446   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:48:30.699961   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:48:30.699981   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:48:30.700356   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:48:30.700531   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:48:30.700779   47297 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:48:30.701065   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:48:30.701099   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:48:30.715031   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41907
	I0907 00:48:30.715374   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:48:30.715847   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:48:30.715866   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:48:30.716151   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:48:30.716316   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:48:30.750129   47297 out.go:177] * Using the kvm2 driver based on existing profile
	I0907 00:48:30.751568   47297 start.go:298] selected driver: kvm2
	I0907 00:48:30.751584   47297 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:48:30.751680   47297 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:48:30.752362   47297 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:48:30.752458   47297 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:48:30.765932   47297 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0907 00:48:30.766254   47297 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:48:30.766285   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:48:30.766297   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:48:30.766312   47297 start_flags.go:321] config:
	{Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:48:30.766449   47297 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:48:30.768165   47297 out.go:177] * Starting control plane node default-k8s-diff-port-773466 in cluster default-k8s-diff-port-773466
	I0907 00:48:28.807066   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:30.769579   47297 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:48:30.769605   47297 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0907 00:48:30.769618   47297 cache.go:57] Caching tarball of preloaded images
	I0907 00:48:30.769690   47297 preload.go:174] Found /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:48:30.769700   47297 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0907 00:48:30.769802   47297 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/config.json ...
	I0907 00:48:30.769965   47297 start.go:365] acquiring machines lock for default-k8s-diff-port-773466: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:48:34.886988   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:37.959093   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:44.039083   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:47.111100   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:53.191104   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:48:56.263090   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:02.343026   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:05.415059   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:11.495064   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:14.567091   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:20.647045   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:23.719041   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:29.799012   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:32.871070   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:38.951073   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:42.023127   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:48.103090   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:51.175063   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:49:57.255062   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:00.327063   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:06.407045   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:09.479083   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:15.559056   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:18.631050   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:24.711070   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:27.783032   46354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.245:22: connect: no route to host
	I0907 00:50:30.786864   46768 start.go:369] acquired machines lock for "no-preload-321164" in 3m55.470116528s
	I0907 00:50:30.786911   46768 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:50:30.786932   46768 fix.go:54] fixHost starting: 
	I0907 00:50:30.787365   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:50:30.787402   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:50:30.802096   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0907 00:50:30.802471   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:50:30.803040   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:50:30.803070   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:50:30.803390   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:50:30.803609   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:30.803735   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:50:30.805366   46768 fix.go:102] recreateIfNeeded on no-preload-321164: state=Stopped err=<nil>
	I0907 00:50:30.805394   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	W0907 00:50:30.805601   46768 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:50:30.807478   46768 out.go:177] * Restarting existing kvm2 VM for "no-preload-321164" ...
	I0907 00:50:30.784621   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:50:30.784665   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:50:30.786659   46354 machine.go:91] provisioned docker machine in 4m37.428246924s
	I0907 00:50:30.786707   46354 fix.go:56] fixHost completed within 4m37.448613342s
	I0907 00:50:30.786715   46354 start.go:83] releasing machines lock for "old-k8s-version-940806", held for 4m37.448629588s
	W0907 00:50:30.786743   46354 start.go:672] error starting host: provision: host is not running
	W0907 00:50:30.786862   46354 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0907 00:50:30.786876   46354 start.go:687] Will try again in 5 seconds ...
	I0907 00:50:30.809015   46768 main.go:141] libmachine: (no-preload-321164) Calling .Start
	I0907 00:50:30.809182   46768 main.go:141] libmachine: (no-preload-321164) Ensuring networks are active...
	I0907 00:50:30.809827   46768 main.go:141] libmachine: (no-preload-321164) Ensuring network default is active
	I0907 00:50:30.810153   46768 main.go:141] libmachine: (no-preload-321164) Ensuring network mk-no-preload-321164 is active
	I0907 00:50:30.810520   46768 main.go:141] libmachine: (no-preload-321164) Getting domain xml...
	I0907 00:50:30.811434   46768 main.go:141] libmachine: (no-preload-321164) Creating domain...
	I0907 00:50:32.024103   46768 main.go:141] libmachine: (no-preload-321164) Waiting to get IP...
	I0907 00:50:32.024955   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.025314   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.025386   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.025302   47622 retry.go:31] will retry after 211.413529ms: waiting for machine to come up
	I0907 00:50:32.238887   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.239424   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.239452   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.239400   47622 retry.go:31] will retry after 306.62834ms: waiting for machine to come up
	I0907 00:50:32.547910   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.548378   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.548409   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.548318   47622 retry.go:31] will retry after 360.126343ms: waiting for machine to come up
	I0907 00:50:32.909809   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:32.910325   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:32.910356   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:32.910259   47622 retry.go:31] will retry after 609.953186ms: waiting for machine to come up
	I0907 00:50:33.522073   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:33.522437   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:33.522467   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:33.522382   47622 retry.go:31] will retry after 526.4152ms: waiting for machine to come up
	I0907 00:50:34.050028   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:34.050475   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:34.050503   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:34.050417   47622 retry.go:31] will retry after 748.311946ms: waiting for machine to come up
	I0907 00:50:34.799933   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:34.800367   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:34.800395   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:34.800321   47622 retry.go:31] will retry after 732.484316ms: waiting for machine to come up
	I0907 00:50:35.788945   46354 start.go:365] acquiring machines lock for old-k8s-version-940806: {Name:mk379e486bb4fb3fa27c69f9ddbab984319ece0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:50:35.534154   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:35.534583   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:35.534606   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:35.534535   47622 retry.go:31] will retry after 1.217693919s: waiting for machine to come up
	I0907 00:50:36.754260   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:36.754682   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:36.754711   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:36.754634   47622 retry.go:31] will retry after 1.508287783s: waiting for machine to come up
	I0907 00:50:38.264195   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:38.264607   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:38.264630   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:38.264557   47622 retry.go:31] will retry after 1.481448978s: waiting for machine to come up
	I0907 00:50:39.748383   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:39.748865   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:39.748898   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:39.748803   47622 retry.go:31] will retry after 2.345045055s: waiting for machine to come up
	I0907 00:50:42.095158   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:42.095801   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:42.095832   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:42.095747   47622 retry.go:31] will retry after 3.269083195s: waiting for machine to come up
	I0907 00:50:45.369097   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:45.369534   46768 main.go:141] libmachine: (no-preload-321164) DBG | unable to find current IP address of domain no-preload-321164 in network mk-no-preload-321164
	I0907 00:50:45.369561   46768 main.go:141] libmachine: (no-preload-321164) DBG | I0907 00:50:45.369448   47622 retry.go:31] will retry after 4.462134893s: waiting for machine to come up
	I0907 00:50:49.835862   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.836273   46768 main.go:141] libmachine: (no-preload-321164) Found IP for machine: 192.168.61.125
	I0907 00:50:49.836315   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has current primary IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.836342   46768 main.go:141] libmachine: (no-preload-321164) Reserving static IP address...
	I0907 00:50:49.836774   46768 main.go:141] libmachine: (no-preload-321164) Reserved static IP address: 192.168.61.125
	I0907 00:50:49.836794   46768 main.go:141] libmachine: (no-preload-321164) Waiting for SSH to be available...
	I0907 00:50:49.836827   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "no-preload-321164", mac: "52:54:00:eb:da:68", ip: "192.168.61.125"} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.836860   46768 main.go:141] libmachine: (no-preload-321164) DBG | skip adding static IP to network mk-no-preload-321164 - found existing host DHCP lease matching {name: "no-preload-321164", mac: "52:54:00:eb:da:68", ip: "192.168.61.125"}
	I0907 00:50:49.836880   46768 main.go:141] libmachine: (no-preload-321164) DBG | Getting to WaitForSSH function...
	I0907 00:50:49.838931   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.839299   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.839326   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.839464   46768 main.go:141] libmachine: (no-preload-321164) DBG | Using SSH client type: external
	I0907 00:50:49.839500   46768 main.go:141] libmachine: (no-preload-321164) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa (-rw-------)
	I0907 00:50:49.839538   46768 main.go:141] libmachine: (no-preload-321164) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:50:49.839557   46768 main.go:141] libmachine: (no-preload-321164) DBG | About to run SSH command:
	I0907 00:50:49.839568   46768 main.go:141] libmachine: (no-preload-321164) DBG | exit 0
	I0907 00:50:49.930557   46768 main.go:141] libmachine: (no-preload-321164) DBG | SSH cmd err, output: <nil>: 
	I0907 00:50:49.931033   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetConfigRaw
	I0907 00:50:49.931662   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:49.934286   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.934719   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.934755   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.934973   46768 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/config.json ...
	I0907 00:50:49.935197   46768 machine.go:88] provisioning docker machine ...
	I0907 00:50:49.935221   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:49.935409   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:49.935567   46768 buildroot.go:166] provisioning hostname "no-preload-321164"
	I0907 00:50:49.935586   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:49.935730   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:49.937619   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.937879   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:49.937899   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:49.938049   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:49.938303   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:49.938464   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:49.938624   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:49.938803   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:49.939300   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:49.939315   46768 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-321164 && echo "no-preload-321164" | sudo tee /etc/hostname
	I0907 00:50:50.076488   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-321164
	
	I0907 00:50:50.076513   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.079041   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.079362   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.079409   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.079614   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.079831   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.080013   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.080183   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.080361   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:50.080757   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:50.080775   46768 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-321164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-321164/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-321164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:50:51.203755   46833 start.go:369] acquired machines lock for "embed-certs-546209" in 4m11.274622402s
	I0907 00:50:51.203804   46833 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:50:51.203823   46833 fix.go:54] fixHost starting: 
	I0907 00:50:51.204233   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:50:51.204274   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:50:51.221096   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0907 00:50:51.221487   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:50:51.222026   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:50:51.222048   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:50:51.222401   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:50:51.222595   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:50:51.222757   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:50:51.224388   46833 fix.go:102] recreateIfNeeded on embed-certs-546209: state=Stopped err=<nil>
	I0907 00:50:51.224413   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	W0907 00:50:51.224585   46833 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:50:51.226812   46833 out.go:177] * Restarting existing kvm2 VM for "embed-certs-546209" ...
	I0907 00:50:50.214796   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:50:50.215590   46768 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:50:50.215629   46768 buildroot.go:174] setting up certificates
	I0907 00:50:50.215639   46768 provision.go:83] configureAuth start
	I0907 00:50:50.215659   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetMachineName
	I0907 00:50:50.215952   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:50.218581   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.218947   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.218970   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.219137   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.221833   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.222177   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.222221   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.222323   46768 provision.go:138] copyHostCerts
	I0907 00:50:50.222377   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:50:50.222390   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:50:50.222497   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:50:50.222628   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:50:50.222646   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:50:50.222682   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:50:50.222765   46768 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:50:50.222784   46768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:50:50.222817   46768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:50:50.222880   46768 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.no-preload-321164 san=[192.168.61.125 192.168.61.125 localhost 127.0.0.1 minikube no-preload-321164]
	I0907 00:50:50.456122   46768 provision.go:172] copyRemoteCerts
	I0907 00:50:50.456175   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:50:50.456198   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.458665   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.459030   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.459053   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.459237   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.459468   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.459630   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.459766   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:50.549146   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:50:50.572002   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0907 00:50:50.595576   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:50:50.618054   46768 provision.go:86] duration metric: configureAuth took 402.401011ms
	I0907 00:50:50.618086   46768 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:50:50.618327   46768 config.go:182] Loaded profile config "no-preload-321164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:50:50.618410   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.620908   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.621255   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.621289   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.621432   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.621619   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.621752   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.621879   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.622006   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:50.622586   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:50.622611   46768 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:50:50.946938   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:50:50.946964   46768 machine.go:91] provisioned docker machine in 1.011750962s
	I0907 00:50:50.946975   46768 start.go:300] post-start starting for "no-preload-321164" (driver="kvm2")
	I0907 00:50:50.946989   46768 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:50:50.947015   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:50.947339   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:50:50.947367   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:50.950370   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.950754   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:50.950798   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:50.950909   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:50.951171   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:50.951331   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:50.951472   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.040440   46768 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:50:51.044700   46768 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:50:51.044728   46768 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:50:51.044816   46768 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:50:51.044899   46768 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:50:51.045018   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:50:51.053507   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:50:51.077125   46768 start.go:303] post-start completed in 130.134337ms
	I0907 00:50:51.077149   46768 fix.go:56] fixHost completed within 20.29021748s
	I0907 00:50:51.077174   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.079928   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.080266   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.080297   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.080516   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.080744   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.080909   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.081080   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.081255   46768 main.go:141] libmachine: Using SSH client type: native
	I0907 00:50:51.081837   46768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.125 22 <nil> <nil>}
	I0907 00:50:51.081853   46768 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:50:51.203596   46768 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047851.182131777
	
	I0907 00:50:51.203636   46768 fix.go:206] guest clock: 1694047851.182131777
	I0907 00:50:51.203646   46768 fix.go:219] Guest: 2023-09-07 00:50:51.182131777 +0000 UTC Remote: 2023-09-07 00:50:51.077154021 +0000 UTC m=+255.896364351 (delta=104.977756ms)
	I0907 00:50:51.203664   46768 fix.go:190] guest clock delta is within tolerance: 104.977756ms
	I0907 00:50:51.203668   46768 start.go:83] releasing machines lock for "no-preload-321164", held for 20.416782491s
	I0907 00:50:51.203696   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.203977   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:51.207262   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.207708   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.207755   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.207926   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208394   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208563   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:50:51.208644   46768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:50:51.208692   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.208755   46768 ssh_runner.go:195] Run: cat /version.json
	I0907 00:50:51.208777   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:50:51.211412   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211453   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211863   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.211901   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.211931   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:51.211957   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:51.212132   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.212212   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:50:51.212318   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.212406   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:50:51.212477   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.212612   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.212722   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:50:51.212875   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:50:51.300796   46768 ssh_runner.go:195] Run: systemctl --version
	I0907 00:50:51.324903   46768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:50:51.465767   46768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:50:51.471951   46768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:50:51.472036   46768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:50:51.488733   46768 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:50:51.488761   46768 start.go:466] detecting cgroup driver to use...
	I0907 00:50:51.488831   46768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:50:51.501772   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:50:51.516019   46768 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:50:51.516083   46768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:50:51.530425   46768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:50:51.546243   46768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:50:51.649058   46768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:50:51.768622   46768 docker.go:212] disabling docker service ...
	I0907 00:50:51.768705   46768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:50:51.785225   46768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:50:51.797018   46768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:50:51.908179   46768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:50:52.021212   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:50:52.037034   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:50:52.055163   46768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:50:52.055218   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.065451   46768 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:50:52.065520   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.076202   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.086865   46768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:50:52.096978   46768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:50:52.107492   46768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:50:52.117036   46768 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:50:52.117104   46768 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:50:52.130309   46768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:50:52.140016   46768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:50:52.249901   46768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:50:52.422851   46768 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:50:52.422928   46768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:50:52.427852   46768 start.go:534] Will wait 60s for crictl version
	I0907 00:50:52.427903   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:52.431904   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:50:52.472552   46768 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:50:52.472632   46768 ssh_runner.go:195] Run: crio --version
	I0907 00:50:52.526514   46768 ssh_runner.go:195] Run: crio --version
	I0907 00:50:52.580133   46768 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:50:51.228316   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Start
	I0907 00:50:51.228549   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring networks are active...
	I0907 00:50:51.229311   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring network default is active
	I0907 00:50:51.229587   46833 main.go:141] libmachine: (embed-certs-546209) Ensuring network mk-embed-certs-546209 is active
	I0907 00:50:51.230001   46833 main.go:141] libmachine: (embed-certs-546209) Getting domain xml...
	I0907 00:50:51.230861   46833 main.go:141] libmachine: (embed-certs-546209) Creating domain...
	I0907 00:50:52.512329   46833 main.go:141] libmachine: (embed-certs-546209) Waiting to get IP...
	I0907 00:50:52.513160   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:52.513607   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:52.513709   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:52.513575   47738 retry.go:31] will retry after 266.575501ms: waiting for machine to come up
	I0907 00:50:52.782236   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:52.782674   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:52.782699   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:52.782623   47738 retry.go:31] will retry after 258.252832ms: waiting for machine to come up
	I0907 00:50:53.042276   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:53.042851   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:53.042886   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:53.042799   47738 retry.go:31] will retry after 480.751908ms: waiting for machine to come up
	I0907 00:50:53.525651   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:53.526280   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:53.526314   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:53.526222   47738 retry.go:31] will retry after 592.373194ms: waiting for machine to come up
	I0907 00:50:54.119935   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:54.120401   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:54.120440   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:54.120320   47738 retry.go:31] will retry after 602.269782ms: waiting for machine to come up
	I0907 00:50:54.723919   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:54.724403   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:54.724429   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:54.724356   47738 retry.go:31] will retry after 631.28427ms: waiting for machine to come up
	I0907 00:50:52.581522   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetIP
	I0907 00:50:52.584587   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:52.584995   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:50:52.585027   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:50:52.585212   46768 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0907 00:50:52.589138   46768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:50:52.602205   46768 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:50:52.602259   46768 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:50:52.633785   46768 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:50:52.633808   46768 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0907 00:50:52.633868   46768 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.633887   46768 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.633889   46768 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.633929   46768 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0907 00:50:52.633954   46768 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.633849   46768 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:52.633937   46768 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.634076   46768 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.635447   46768 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.635477   46768 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.635516   46768 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.635529   46768 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.635477   46768 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.635578   46768 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.635583   46768 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0907 00:50:52.635587   46768 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:52.868791   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.917664   46768 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0907 00:50:52.917705   46768 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.917740   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:52.921520   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0907 00:50:52.924174   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:52.924775   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0907 00:50:52.926455   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:52.927265   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:52.936511   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:52.936550   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:52.989863   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0907 00:50:52.989967   46768 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.081783   46768 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0907 00:50:53.081828   46768 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:53.081876   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.200951   46768 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0907 00:50:53.200999   46768 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:53.201037   46768 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0907 00:50:53.201055   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201074   46768 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:53.201115   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201120   46768 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0907 00:50:53.201138   46768 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:53.201163   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201196   46768 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0907 00:50:53.201208   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0907 00:50:53.201220   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.201222   46768 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:53.201245   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0907 00:50:53.201254   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:53.201257   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0907 00:50:53.213879   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0907 00:50:53.213909   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0907 00:50:53.214030   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0907 00:50:53.559290   46768 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.356797   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:55.357248   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:55.357276   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:55.357208   47738 retry.go:31] will retry after 957.470134ms: waiting for machine to come up
	I0907 00:50:56.316920   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:56.317410   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:56.317437   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:56.317357   47738 retry.go:31] will retry after 929.647798ms: waiting for machine to come up
	I0907 00:50:57.249114   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:57.249599   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:57.249631   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:57.249548   47738 retry.go:31] will retry after 1.218276188s: waiting for machine to come up
	I0907 00:50:58.470046   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:50:58.470509   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:50:58.470539   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:50:58.470461   47738 retry.go:31] will retry after 2.324175972s: waiting for machine to come up
	I0907 00:50:55.219723   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (2.018454399s)
	I0907 00:50:55.219753   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0907 00:50:55.219835   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0: (2.018563387s)
	I0907 00:50:55.219874   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0907 00:50:55.219897   46768 ssh_runner.go:235] Completed: which crictl: (2.01861063s)
	I0907 00:50:55.219931   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1: (2.006023749s)
	I0907 00:50:55.219956   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0907 00:50:55.219965   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0907 00:50:55.219974   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:55.220018   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.220026   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1: (2.006085999s)
	I0907 00:50:55.220034   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1: (2.005987599s)
	I0907 00:50:55.220056   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0907 00:50:55.220062   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0907 00:50:55.220065   46768 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.660750078s)
	I0907 00:50:55.220091   46768 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0907 00:50:55.220107   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:50:55.220139   46768 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.220178   46768 ssh_runner.go:195] Run: which crictl
	I0907 00:50:55.220141   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:50:55.263187   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0907 00:50:55.263256   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0907 00:50:55.263276   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.263282   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0907 00:50:55.263291   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:50:55.263321   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0907 00:50:55.263334   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0907 00:50:55.263428   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0907 00:50:55.263432   46768 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:50:55.275710   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0907 00:50:58.251089   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.987744073s)
	I0907 00:50:58.251119   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0907 00:50:58.251125   46768 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.987662447s)
	I0907 00:50:58.251143   46768 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:58.251164   46768 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0907 00:50:58.251192   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0907 00:50:58.251253   46768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0907 00:50:58.256733   46768 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0907 00:51:00.798145   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:00.798673   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:00.798702   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:00.798607   47738 retry.go:31] will retry after 1.874271621s: waiting for machine to come up
	I0907 00:51:02.674532   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:02.675085   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:02.675117   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:02.675050   47738 retry.go:31] will retry after 2.9595889s: waiting for machine to come up
	I0907 00:51:04.952628   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.701410779s)
	I0907 00:51:04.952741   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0907 00:51:04.952801   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:51:04.952854   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0907 00:51:05.636309   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:05.636744   46833 main.go:141] libmachine: (embed-certs-546209) DBG | unable to find current IP address of domain embed-certs-546209 in network mk-embed-certs-546209
	I0907 00:51:05.636779   46833 main.go:141] libmachine: (embed-certs-546209) DBG | I0907 00:51:05.636694   47738 retry.go:31] will retry after 4.45645523s: waiting for machine to come up
	I0907 00:51:06.100759   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.147880303s)
	I0907 00:51:06.100786   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0907 00:51:06.100803   46768 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:51:06.100844   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0907 00:51:08.663694   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.56282168s)
	I0907 00:51:08.663725   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0907 00:51:08.663754   46768 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:51:08.663803   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0907 00:51:10.023202   46768 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.359374479s)
	I0907 00:51:10.023234   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0907 00:51:10.023276   46768 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0907 00:51:10.023349   46768 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0907 00:51:11.739345   47297 start.go:369] acquired machines lock for "default-k8s-diff-port-773466" in 2m40.969329009s
	I0907 00:51:11.739394   47297 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:51:11.739419   47297 fix.go:54] fixHost starting: 
	I0907 00:51:11.739834   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:11.739870   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:11.755796   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38079
	I0907 00:51:11.756102   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:11.756564   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:51:11.756588   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:11.756875   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:11.757032   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:11.757185   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:51:11.758750   47297 fix.go:102] recreateIfNeeded on default-k8s-diff-port-773466: state=Stopped err=<nil>
	I0907 00:51:11.758772   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	W0907 00:51:11.758955   47297 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:51:11.761066   47297 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-773466" ...
	I0907 00:51:10.095825   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.096285   46833 main.go:141] libmachine: (embed-certs-546209) Found IP for machine: 192.168.50.242
	I0907 00:51:10.096312   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has current primary IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.096321   46833 main.go:141] libmachine: (embed-certs-546209) Reserving static IP address...
	I0907 00:51:10.096706   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "embed-certs-546209", mac: "52:54:00:96:b3:6a", ip: "192.168.50.242"} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.096731   46833 main.go:141] libmachine: (embed-certs-546209) Reserved static IP address: 192.168.50.242
	I0907 00:51:10.096750   46833 main.go:141] libmachine: (embed-certs-546209) DBG | skip adding static IP to network mk-embed-certs-546209 - found existing host DHCP lease matching {name: "embed-certs-546209", mac: "52:54:00:96:b3:6a", ip: "192.168.50.242"}
	I0907 00:51:10.096766   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Getting to WaitForSSH function...
	I0907 00:51:10.096777   46833 main.go:141] libmachine: (embed-certs-546209) Waiting for SSH to be available...
	I0907 00:51:10.098896   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.099227   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.099260   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.099360   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Using SSH client type: external
	I0907 00:51:10.099382   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa (-rw-------)
	I0907 00:51:10.099412   46833 main.go:141] libmachine: (embed-certs-546209) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:10.099428   46833 main.go:141] libmachine: (embed-certs-546209) DBG | About to run SSH command:
	I0907 00:51:10.099444   46833 main.go:141] libmachine: (embed-certs-546209) DBG | exit 0
	I0907 00:51:10.199038   46833 main.go:141] libmachine: (embed-certs-546209) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:10.199377   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetConfigRaw
	I0907 00:51:10.200006   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:10.202924   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.203328   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.203352   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.203576   46833 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/config.json ...
	I0907 00:51:10.203879   46833 machine.go:88] provisioning docker machine ...
	I0907 00:51:10.203908   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:10.204125   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.204290   46833 buildroot.go:166] provisioning hostname "embed-certs-546209"
	I0907 00:51:10.204312   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.204489   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.206898   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.207332   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.207365   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.207473   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.207643   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.207791   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.207920   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.208080   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:10.208476   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:10.208496   46833 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-546209 && echo "embed-certs-546209" | sudo tee /etc/hostname
	I0907 00:51:10.356060   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-546209
	
	I0907 00:51:10.356098   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.359533   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.359867   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.359896   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.360097   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.360284   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.360435   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.360629   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.360820   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:10.361504   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:10.361538   46833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-546209' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-546209/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-546209' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:10.503181   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:10.503211   46833 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:10.503238   46833 buildroot.go:174] setting up certificates
	I0907 00:51:10.503246   46833 provision.go:83] configureAuth start
	I0907 00:51:10.503254   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetMachineName
	I0907 00:51:10.503555   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:10.506514   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.506930   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.506955   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.507150   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.509772   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.510081   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.510111   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.510215   46833 provision.go:138] copyHostCerts
	I0907 00:51:10.510281   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:10.510292   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:10.510345   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:10.510438   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:10.510446   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:10.510466   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:10.510552   46833 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:10.510559   46833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:10.510579   46833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:10.510638   46833 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.embed-certs-546209 san=[192.168.50.242 192.168.50.242 localhost 127.0.0.1 minikube embed-certs-546209]
	I0907 00:51:10.947044   46833 provision.go:172] copyRemoteCerts
	I0907 00:51:10.947101   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:10.947122   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:10.949879   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.950221   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:10.950251   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:10.950456   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:10.950660   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:10.950849   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:10.950993   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.052610   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:11.077082   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0907 00:51:11.100979   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:51:11.124155   46833 provision.go:86] duration metric: configureAuth took 620.900948ms
	I0907 00:51:11.124176   46833 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:11.124389   46833 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:11.124456   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.127163   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.127498   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.127536   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.127813   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.128011   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.128201   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.128381   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.128560   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:11.129185   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:11.129214   46833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:11.467260   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:11.467297   46833 machine.go:91] provisioned docker machine in 1.263400182s
	I0907 00:51:11.467309   46833 start.go:300] post-start starting for "embed-certs-546209" (driver="kvm2")
	I0907 00:51:11.467321   46833 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:11.467343   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.467669   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:11.467715   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.470299   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.470675   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.470705   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.470846   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.471038   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.471191   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.471435   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.568708   46833 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:11.573505   46833 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:11.573533   46833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:11.573595   46833 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:11.573669   46833 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:11.573779   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:11.582612   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:11.607383   46833 start.go:303] post-start completed in 140.062214ms
	I0907 00:51:11.607400   46833 fix.go:56] fixHost completed within 20.403578781s
	I0907 00:51:11.607419   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.609882   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.610233   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.610265   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.610411   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.610602   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.610792   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.610972   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.611161   46833 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:11.611550   46833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0907 00:51:11.611563   46833 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0907 00:51:11.739146   46833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047871.687486971
	
	I0907 00:51:11.739167   46833 fix.go:206] guest clock: 1694047871.687486971
	I0907 00:51:11.739176   46833 fix.go:219] Guest: 2023-09-07 00:51:11.687486971 +0000 UTC Remote: 2023-09-07 00:51:11.607403696 +0000 UTC m=+271.818672785 (delta=80.083275ms)
	I0907 00:51:11.739196   46833 fix.go:190] guest clock delta is within tolerance: 80.083275ms
	I0907 00:51:11.739202   46833 start.go:83] releasing machines lock for "embed-certs-546209", held for 20.535419293s
	I0907 00:51:11.739232   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.739478   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:11.742078   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.742446   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.742474   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.742676   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743172   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743342   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:11.743422   46833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:11.743470   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.743541   46833 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:11.743573   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:11.746120   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746484   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.746516   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746536   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.746640   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.746843   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.746989   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.747015   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:11.747044   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:11.747169   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.747179   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:11.747394   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:11.747556   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:11.747717   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:11.839831   46833 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:11.861736   46833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:12.006017   46833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:12.011678   46833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:12.011739   46833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:12.026851   46833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:12.026871   46833 start.go:466] detecting cgroup driver to use...
	I0907 00:51:12.026934   46833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:12.040077   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:12.052962   46833 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:12.053018   46833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:12.066509   46833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:12.079587   46833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:12.189043   46833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:12.310997   46833 docker.go:212] disabling docker service ...
	I0907 00:51:12.311065   46833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:12.324734   46833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:12.336808   46833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:12.461333   46833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:12.584841   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:12.598337   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:12.615660   46833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:51:12.615736   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.626161   46833 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:12.626232   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.637475   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.647631   46833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:12.658444   46833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:12.669167   46833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:12.678558   46833 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:12.678614   46833 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:12.692654   46833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:12.703465   46833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:12.820819   46833 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:12.996574   46833 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:12.996650   46833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:13.002744   46833 start.go:534] Will wait 60s for crictl version
	I0907 00:51:13.002818   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:51:13.007287   46833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:13.042173   46833 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:13.042254   46833 ssh_runner.go:195] Run: crio --version
	I0907 00:51:13.090562   46833 ssh_runner.go:195] Run: crio --version
	I0907 00:51:13.145112   46833 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0907 00:51:13.146767   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetIP
	I0907 00:51:13.149953   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:13.150357   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:13.150388   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:13.150603   46833 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:13.154792   46833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:13.166540   46833 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:51:13.166607   46833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:13.203316   46833 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:51:13.203391   46833 ssh_runner.go:195] Run: which lz4
	I0907 00:51:13.207399   46833 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0907 00:51:13.211826   46833 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:13.211854   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0907 00:51:10.979891   46768 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0907 00:51:10.979935   46768 cache_images.go:123] Successfully loaded all cached images
	I0907 00:51:10.979942   46768 cache_images.go:92] LoadImages completed in 18.346122768s
	I0907 00:51:10.980017   46768 ssh_runner.go:195] Run: crio config
	I0907 00:51:11.044573   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:51:11.044595   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:11.044612   46768 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:11.044630   46768 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.125 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-321164 NodeName:no-preload-321164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:11.044749   46768 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-321164"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:11.044807   46768 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-321164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-321164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:51:11.044852   46768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:11.055469   46768 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:11.055527   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:11.063642   46768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0907 00:51:11.081151   46768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:11.098623   46768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0907 00:51:11.116767   46768 ssh_runner.go:195] Run: grep 192.168.61.125	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:11.120552   46768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:11.133845   46768 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164 for IP: 192.168.61.125
	I0907 00:51:11.133876   46768 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:11.134026   46768 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:11.134092   46768 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:11.134173   46768 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.key
	I0907 00:51:11.134216   46768 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.key.05d6cdfc
	I0907 00:51:11.134252   46768 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.key
	I0907 00:51:11.134393   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:11.134436   46768 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:11.134455   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:11.134488   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:11.134512   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:11.134534   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:11.134576   46768 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:11.135184   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:11.161212   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:51:11.185797   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:11.209084   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:51:11.233001   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:11.255646   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:11.278323   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:11.301913   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:11.324316   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:11.349950   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:11.375738   46768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:11.402735   46768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:11.421372   46768 ssh_runner.go:195] Run: openssl version
	I0907 00:51:11.426855   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:11.436392   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.440778   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.440825   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:11.446374   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:11.455773   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:11.465073   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.470197   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.470243   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:11.475740   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:11.484993   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:11.494256   46768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.498766   46768 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.498825   46768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:11.504037   46768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:11.512896   46768 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:11.517289   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:11.523115   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:11.528780   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:11.534330   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:11.539777   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:11.545439   46768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:51:11.550878   46768 kubeadm.go:404] StartCluster: {Name:no-preload-321164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-321164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:11.550968   46768 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:11.551014   46768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:11.582341   46768 cri.go:89] found id: ""
	I0907 00:51:11.582409   46768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:11.591760   46768 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:11.591782   46768 kubeadm.go:636] restartCluster start
	I0907 00:51:11.591825   46768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:11.600241   46768 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:11.601258   46768 kubeconfig.go:92] found "no-preload-321164" server: "https://192.168.61.125:8443"
	I0907 00:51:11.603775   46768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:11.612221   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:11.612268   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:11.622330   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:11.622348   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:11.622392   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:11.632889   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:12.133626   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:12.133726   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:12.144713   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:12.633065   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:12.633145   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:12.648698   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:13.133304   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:13.133401   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:13.146822   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:13.633303   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:13.633374   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:13.648566   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:14.132966   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:14.133041   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:14.147847   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:14.633090   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:14.633177   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:14.648893   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:15.133388   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:15.133465   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:15.149162   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
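
The 46768 trace above ends its certificate setup by rewriting /etc/hosts so that control-plane.minikube.internal resolves to the node IP before kubeadm runs. A minimal sketch of that idempotent update, reusing the IP from this trace (the NODE_IP variable and temp-file name are just for the sketch; substitute your own node address):

    # Remove any stale control-plane.minikube.internal entry, then append the current one.
    NODE_IP=192.168.61.125    # value taken from the trace above
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '%s\tcontrol-plane.minikube.internal\n' "$NODE_IP"
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts

Filtering the old entry before appending the new one means the command can be re-run on every restart and still leave exactly one line, which is what the log shows here and again in the 46833 trace below.
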
	I0907 00:51:11.762623   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Start
	I0907 00:51:11.762823   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring networks are active...
	I0907 00:51:11.763580   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring network default is active
	I0907 00:51:11.764022   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Ensuring network mk-default-k8s-diff-port-773466 is active
	I0907 00:51:11.764494   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Getting domain xml...
	I0907 00:51:11.765139   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Creating domain...
	I0907 00:51:13.032555   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting to get IP...
	I0907 00:51:13.033441   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.033887   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.033934   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.033855   47907 retry.go:31] will retry after 214.721735ms: waiting for machine to come up
	I0907 00:51:13.250549   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.251062   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.251090   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.251001   47907 retry.go:31] will retry after 260.305773ms: waiting for machine to come up
	I0907 00:51:13.512603   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.513144   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.513175   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.513088   47907 retry.go:31] will retry after 293.213959ms: waiting for machine to come up
	I0907 00:51:13.807649   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.808180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:13.808216   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:13.808128   47907 retry.go:31] will retry after 455.70029ms: waiting for machine to come up
	I0907 00:51:14.265914   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:14.266412   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:14.266444   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:14.266367   47907 retry.go:31] will retry after 761.48199ms: waiting for machine to come up
	I0907 00:51:15.029446   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.029916   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.029950   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:15.029868   47907 retry.go:31] will retry after 889.947924ms: waiting for machine to come up
	I0907 00:51:15.079606   46833 crio.go:444] Took 1.872243 seconds to copy over tarball
	I0907 00:51:15.079679   46833 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:51:18.068521   46833 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988813422s)
	I0907 00:51:18.068547   46833 crio.go:451] Took 2.988919 seconds to extract the tarball
	I0907 00:51:18.068557   46833 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:51:18.109973   46833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:18.154472   46833 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:51:18.154493   46833 cache_images.go:84] Images are preloaded, skipping loading
	I0907 00:51:18.154568   46833 ssh_runner.go:195] Run: crio config
	I0907 00:51:18.216517   46833 cni.go:84] Creating CNI manager for ""
	I0907 00:51:18.216549   46833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:18.216571   46833 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:18.216597   46833 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.242 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-546209 NodeName:embed-certs-546209 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:18.216747   46833 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-546209"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:18.216815   46833 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-546209 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:51:18.216863   46833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:18.230093   46833 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:18.230164   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:18.239087   46833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0907 00:51:18.256683   46833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:18.274030   46833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0907 00:51:18.294711   46833 ssh_runner.go:195] Run: grep 192.168.50.242	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:18.299655   46833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:18.312980   46833 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209 for IP: 192.168.50.242
	I0907 00:51:18.313028   46833 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:18.313215   46833 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:18.313283   46833 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:18.313382   46833 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/client.key
	I0907 00:51:18.313446   46833 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.key.5dc0f9a1
	I0907 00:51:18.313495   46833 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.key
	I0907 00:51:18.313607   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:18.313633   46833 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:18.313640   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:18.313665   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:18.313688   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:18.313709   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:18.313747   46833 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:18.314356   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:18.344731   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:51:18.368872   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:18.397110   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/embed-certs-546209/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:51:18.424441   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:18.452807   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:18.481018   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:18.509317   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:18.541038   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:18.565984   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:18.590863   46833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:18.614083   46833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:18.631295   46833 ssh_runner.go:195] Run: openssl version
	I0907 00:51:18.637229   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:18.651999   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.656999   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.657052   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:18.663109   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:18.675826   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:18.688358   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.693281   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.693331   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:18.699223   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:18.711511   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:18.724096   46833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.729285   46833 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.729338   46833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:18.735410   46833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:18.747948   46833 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:18.753003   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:18.759519   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:18.765813   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:18.772328   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:18.778699   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:18.785207   46833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:51:18.791515   46833 kubeadm.go:404] StartCluster: {Name:embed-certs-546209 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:18.791636   46833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:18.791719   46833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:18.831468   46833 cri.go:89] found id: ""
	I0907 00:51:18.831544   46833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:18.843779   46833 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:18.843805   46833 kubeadm.go:636] restartCluster start
	I0907 00:51:18.843863   46833 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:18.854604   46833 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.855622   46833 kubeconfig.go:92] found "embed-certs-546209" server: "https://192.168.50.242:8443"
	I0907 00:51:18.857679   46833 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:18.867583   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.867640   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.879567   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.879587   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.879634   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.891098   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.391839   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.391932   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.405078   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
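
Earlier in this 46833 trace (00:51:18.63-00:51:18.79) each certificate copied to the node is linked into the system trust store and then checked for imminent expiry. A short sketch of those two steps, reusing the minikubeCA example from the log (the HASH variable is just for the sketch; the b5213941 hash belongs to that particular CA):

    # Expose the CA under the OpenSSL subject-hash name so tools can find it.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    # -checkend 86400 exits non-zero if the certificate expires within the next 24 hours.
    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "etcd server certificate is good for at least another day"

The trace runs the same -checkend probe against each control-plane certificate before reusing the existing set.
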
	I0907 00:51:15.633045   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:15.633128   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:15.644837   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:16.133842   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:16.133926   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:16.148072   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:16.633750   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:16.633828   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:16.648961   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:17.133669   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:17.133757   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:17.148342   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:17.633967   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:17.634076   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:17.649188   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.133815   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.133917   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.148350   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:18.633962   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:18.634047   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:18.649195   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.133733   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.133821   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.145109   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:19.633727   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:19.633808   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:19.645272   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.133921   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.133990   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.145494   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
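
Every "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pair above is one iteration of a poll: roughly every half second the trace looks for a kube-apiserver process with pgrep and logs the non-zero exit until one shows up. A stripped-down version of that wait loop (the 60-second cap is only for illustration):

    # Poll for a kube-apiserver process started by minikube; give up after ~60s.
    for _ in $(seq 1 120); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver process is up"
        break
      fi
      sleep 0.5
    done

In this log the loop keeps failing because no kube-system containers are running yet; the process only reappears after the kubeadm init phases are re-run further down.
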
	I0907 00:51:15.920914   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.921395   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:15.921430   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:15.921325   47907 retry.go:31] will retry after 952.422054ms: waiting for machine to come up
	I0907 00:51:16.875800   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:16.876319   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:16.876356   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:16.876272   47907 retry.go:31] will retry after 1.481584671s: waiting for machine to come up
	I0907 00:51:18.359815   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:18.360270   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:18.360308   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:18.360185   47907 retry.go:31] will retry after 1.355619716s: waiting for machine to come up
	I0907 00:51:19.717081   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:19.717458   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:19.717485   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:19.717419   47907 retry.go:31] will retry after 1.450172017s: waiting for machine to come up
	I0907 00:51:19.892019   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.038702   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.051318   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.391815   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.391913   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.404956   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.891503   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.891594   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.904473   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.391486   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.391563   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.405726   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.891257   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.891337   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.905422   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:22.392028   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:22.392137   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:22.408621   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:22.891926   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:22.892033   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:22.906116   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:23.391605   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:23.391684   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:23.404834   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:23.891360   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:23.891447   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:23.908340   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:24.391916   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:24.392007   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:24.408806   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:20.633099   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:20.633200   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:20.644181   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.133144   46768 api_server.go:166] Checking apiserver status ...
	I0907 00:51:21.133227   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:21.144139   46768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:21.612786   46768 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:21.612814   46768 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:21.612826   46768 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:21.612881   46768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:21.643142   46768 cri.go:89] found id: ""
	I0907 00:51:21.643216   46768 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:21.658226   46768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:21.666895   46768 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:21.666960   46768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:21.675285   46768 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:21.675317   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:21.817664   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.473084   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.670341   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.752820   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:22.842789   46768 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:22.842868   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:22.861783   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:23.383385   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:23.884041   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:24.384065   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:24.884077   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:21.168650   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:21.169014   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:21.169037   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:21.168966   47907 retry.go:31] will retry after 2.876055316s: waiting for machine to come up
	I0907 00:51:24.046598   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:24.046990   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:24.047020   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:24.046937   47907 retry.go:31] will retry after 2.837607521s: waiting for machine to come up
	I0907 00:51:24.891477   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:24.891564   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:24.908102   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:25.391625   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:25.391704   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:25.408399   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:25.892052   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:25.892166   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:25.909608   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:26.391529   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:26.391610   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:26.407459   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:26.891930   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:26.891994   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:26.908217   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:27.391815   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:27.391898   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:27.404370   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:27.891918   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:27.892001   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:27.904988   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:28.391570   46833 api_server.go:166] Checking apiserver status ...
	I0907 00:51:28.391650   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:28.403968   46833 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:28.868619   46833 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:28.868666   46833 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:28.868679   46833 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:28.868736   46833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:28.907258   46833 cri.go:89] found id: ""
	I0907 00:51:28.907332   46833 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:28.926539   46833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:28.938760   46833 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:28.938837   46833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:28.950550   46833 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:28.950576   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:29.092484   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
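
With the stale /etc/kubernetes/*.conf files gone, both traces rebuild the control plane by running individual kubeadm init phases rather than a full kubeadm init; the 46768 run executes the whole sequence at 00:51:21-00:51:22 above, and the 46833 run has just started the same sequence here. Spelled out as plain commands (the BIN and CFG variables are only for readability; paths are exactly those in the trace):

    BIN=/var/lib/minikube/binaries/v1.28.1
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local --config "$CFG"

After the last phase the trace switches to waiting for the apiserver process and then for /healthz to report ready.
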
	I0907 00:51:25.383423   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:25.413853   46768 api_server.go:72] duration metric: took 2.571070768s to wait for apiserver process to appear ...
	I0907 00:51:25.413877   46768 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:25.413895   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.168577   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:29.168617   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:29.168629   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.228753   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:29.228785   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:29.729501   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:29.735318   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:29.735345   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
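
The 403 and 500 responses above come from the apiserver's /healthz endpoint while its post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still completing. A minimal way to reproduce the probe by hand, using the endpoint shown in the log (the apiserver certificate is not trusted from the probing host, so -k is needed), is:

    # poll the endpoint from the log until it reports healthy
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.61.125:8443/healthz)" = "200" ]; do
      sleep 0.5
    done
    curl -sk https://192.168.61.125:8443/healthz   # prints "ok" once all post-start hooks have finished
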
	I0907 00:51:26.886341   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:26.886797   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | unable to find current IP address of domain default-k8s-diff-port-773466 in network mk-default-k8s-diff-port-773466
	I0907 00:51:26.886819   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | I0907 00:51:26.886742   47907 retry.go:31] will retry after 3.776269501s: waiting for machine to come up
	I0907 00:51:30.665170   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.665736   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Found IP for machine: 192.168.39.96
	I0907 00:51:30.665770   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Reserving static IP address...
	I0907 00:51:30.665788   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has current primary IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.666183   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-773466", mac: "52:54:00:61:2c:44", ip: "192.168.39.96"} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.666226   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | skip adding static IP to network mk-default-k8s-diff-port-773466 - found existing host DHCP lease matching {name: "default-k8s-diff-port-773466", mac: "52:54:00:61:2c:44", ip: "192.168.39.96"}
	I0907 00:51:30.666245   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Reserved static IP address: 192.168.39.96
	I0907 00:51:30.666262   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Waiting for SSH to be available...
	I0907 00:51:30.666279   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Getting to WaitForSSH function...
	I0907 00:51:30.668591   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.229871   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:30.240735   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:30.240764   46768 api_server.go:103] status: https://192.168.61.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:30.729911   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:51:30.736989   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 200:
	ok
	I0907 00:51:30.746939   46768 api_server.go:141] control plane version: v1.28.1
	I0907 00:51:30.746964   46768 api_server.go:131] duration metric: took 5.333080985s to wait for apiserver health ...
	I0907 00:51:30.746973   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:51:30.746979   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:30.748709   46768 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:32.716941   46354 start.go:369] acquired machines lock for "old-k8s-version-940806" in 56.927952192s
	I0907 00:51:32.717002   46354 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:51:32.717014   46354 fix.go:54] fixHost starting: 
	I0907 00:51:32.717431   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:32.717466   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:32.735021   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39241
	I0907 00:51:32.735485   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:32.736057   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:51:32.736083   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:32.736457   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:32.736713   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:32.736903   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:51:32.738719   46354 fix.go:102] recreateIfNeeded on old-k8s-version-940806: state=Stopped err=<nil>
	I0907 00:51:32.738743   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	W0907 00:51:32.738924   46354 fix.go:128] unexpected machine state, will restart: <nil>
	I0907 00:51:32.740721   46354 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-940806" ...
	I0907 00:51:32.742202   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Start
	I0907 00:51:32.742362   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring networks are active...
	I0907 00:51:32.743087   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring network default is active
	I0907 00:51:32.743499   46354 main.go:141] libmachine: (old-k8s-version-940806) Ensuring network mk-old-k8s-version-940806 is active
	I0907 00:51:32.743863   46354 main.go:141] libmachine: (old-k8s-version-940806) Getting domain xml...
	I0907 00:51:32.744603   46354 main.go:141] libmachine: (old-k8s-version-940806) Creating domain...
	I0907 00:51:30.668969   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.670773   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.670838   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Using SSH client type: external
	I0907 00:51:30.670876   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa (-rw-------)
	I0907 00:51:30.670918   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:30.670934   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | About to run SSH command:
	I0907 00:51:30.670947   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | exit 0
	I0907 00:51:30.770939   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:30.771333   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetConfigRaw
	I0907 00:51:30.772100   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:30.775128   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.775616   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.775654   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.775923   47297 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/config.json ...
	I0907 00:51:30.776161   47297 machine.go:88] provisioning docker machine ...
	I0907 00:51:30.776180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:30.776399   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:30.776597   47297 buildroot.go:166] provisioning hostname "default-k8s-diff-port-773466"
	I0907 00:51:30.776618   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:30.776805   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:30.779367   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.779761   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.779793   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.780022   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:30.780238   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.780399   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.780534   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:30.780687   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:30.781088   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:30.781102   47297 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-773466 && echo "default-k8s-diff-port-773466" | sudo tee /etc/hostname
	I0907 00:51:30.932287   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-773466
	
	I0907 00:51:30.932320   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:30.935703   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.936111   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:30.936146   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:30.936324   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:30.936647   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.936851   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:30.937054   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:30.937266   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:30.937890   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:30.937932   47297 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-773466' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-773466/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-773466' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:31.091619   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:31.091654   47297 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:31.091707   47297 buildroot.go:174] setting up certificates
	I0907 00:51:31.091724   47297 provision.go:83] configureAuth start
	I0907 00:51:31.091746   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetMachineName
	I0907 00:51:31.092066   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:31.095183   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.095670   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.095710   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.095861   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:31.098597   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.098887   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.098962   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.099205   47297 provision.go:138] copyHostCerts
	I0907 00:51:31.099275   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:31.099291   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:31.099362   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:31.099516   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:31.099531   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:31.099563   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:31.099658   47297 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:31.099671   47297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:31.099700   47297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:31.099807   47297 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-773466 san=[192.168.39.96 192.168.39.96 localhost 127.0.0.1 minikube default-k8s-diff-port-773466]
	I0907 00:51:31.793599   47297 provision.go:172] copyRemoteCerts
	I0907 00:51:31.793653   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:31.793676   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:31.796773   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.797153   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:31.797192   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:31.797362   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:31.797578   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:31.797751   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:31.797865   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:31.903781   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:31.935908   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0907 00:51:31.967385   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0907 00:51:31.998542   47297 provision.go:86] duration metric: configureAuth took 906.744341ms
	I0907 00:51:31.998576   47297 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:31.998836   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:31.998941   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.002251   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.002712   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.002747   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.002996   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.003300   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.003531   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.003717   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.003996   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:32.004637   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:32.004662   47297 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:32.413687   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:32.413765   47297 machine.go:91] provisioned docker machine in 1.637590059s
	I0907 00:51:32.413777   47297 start.go:300] post-start starting for "default-k8s-diff-port-773466" (driver="kvm2")
	I0907 00:51:32.413787   47297 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:32.413823   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.414183   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:32.414227   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.417432   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.417894   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.417954   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.418202   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.418371   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.418517   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.418625   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.523519   47297 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:32.528959   47297 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:32.528983   47297 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:32.529050   47297 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:32.529144   47297 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:32.529249   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:32.538827   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:32.569792   47297 start.go:303] post-start completed in 156.000078ms
	I0907 00:51:32.569819   47297 fix.go:56] fixHost completed within 20.830399155s
	I0907 00:51:32.569860   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.573180   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.573599   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.573653   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.573846   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.574100   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.574292   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.574470   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.574658   47297 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:32.575266   47297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I0907 00:51:32.575282   47297 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:51:32.716793   47297 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047892.656226759
	
	I0907 00:51:32.716819   47297 fix.go:206] guest clock: 1694047892.656226759
	I0907 00:51:32.716829   47297 fix.go:219] Guest: 2023-09-07 00:51:32.656226759 +0000 UTC Remote: 2023-09-07 00:51:32.569839112 +0000 UTC m=+181.933138455 (delta=86.387647ms)
	I0907 00:51:32.716855   47297 fix.go:190] guest clock delta is within tolerance: 86.387647ms
	I0907 00:51:32.716868   47297 start.go:83] releasing machines lock for "default-k8s-diff-port-773466", held for 20.977496549s
	I0907 00:51:32.716900   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.717205   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:32.720353   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.720794   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.720825   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.721001   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721495   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721675   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:51:32.721767   47297 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:32.721813   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.721925   47297 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:32.721951   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:51:32.724909   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725154   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725464   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.725510   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725626   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.725808   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.725825   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:32.725845   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:32.725869   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:51:32.725967   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.726058   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:51:32.726164   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.726216   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:51:32.726352   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:51:32.845353   47297 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:32.851616   47297 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:33.005642   47297 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:33.013527   47297 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:33.013603   47297 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:33.033433   47297 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:33.033467   47297 start.go:466] detecting cgroup driver to use...
	I0907 00:51:33.033538   47297 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:33.055861   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:33.073405   47297 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:33.073477   47297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:33.090484   47297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:33.104735   47297 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:33.245072   47297 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:33.411559   47297 docker.go:212] disabling docker service ...
	I0907 00:51:33.411625   47297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:33.429768   47297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:33.446597   47297 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:33.581915   47297 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:33.704648   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:33.721447   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:33.740243   47297 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0907 00:51:33.740330   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.750871   47297 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:33.750937   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.761620   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.774350   47297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:33.787718   47297 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:33.802740   47297 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:33.814899   47297 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:33.814975   47297 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:33.832422   47297 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:33.844513   47297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:34.020051   47297 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:34.252339   47297 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:34.252415   47297 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:34.258055   47297 start.go:534] Will wait 60s for crictl version
	I0907 00:51:34.258179   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:51:34.262511   47297 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:34.304552   47297 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:34.304626   47297 ssh_runner.go:195] Run: crio --version
	I0907 00:51:34.376009   47297 ssh_runner.go:195] Run: crio --version
	I0907 00:51:34.448097   47297 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
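
The CRI-O preparation above reduces to a handful of host-side commands; a consolidated sketch of the steps the log records (config path, pause image, and cgroup values taken directly from the log lines) is:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter        # the bridge-nf-call-iptables sysctl was missing until this module loaded
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
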
	I0907 00:51:29.972856   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.178016   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.291593   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:30.385791   46833 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:30.385865   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:30.404991   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:30.926995   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:31.427043   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:31.927049   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.426422   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.927274   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:32.955713   46833 api_server.go:72] duration metric: took 2.569919035s to wait for apiserver process to appear ...
	I0907 00:51:32.955739   46833 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:32.955757   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:32.956284   46833 api_server.go:269] stopped: https://192.168.50.242:8443/healthz: Get "https://192.168.50.242:8443/healthz": dial tcp 192.168.50.242:8443: connect: connection refused
	I0907 00:51:32.956316   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:32.957189   46833 api_server.go:269] stopped: https://192.168.50.242:8443/healthz: Get "https://192.168.50.242:8443/healthz": dial tcp 192.168.50.242:8443: connect: connection refused
	I0907 00:51:33.457905   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
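
The stanza above is the process-wait phase of the restart: kubeadm has just regenerated the control-plane manifests, and minikube polls for the kube-apiserver process before switching to /healthz probes (the first probes fail with "connection refused" because nothing is listening on port 8443 yet). A hand-rolled equivalent of that wait, using the same pgrep pattern from the log, would be:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5   # the log shows roughly 500ms between retries
    done
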
	I0907 00:51:30.750097   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:51:30.784742   46768 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:51:30.828002   46768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:51:30.852490   46768 system_pods.go:59] 8 kube-system pods found
	I0907 00:51:30.852534   46768 system_pods.go:61] "coredns-5dd5756b68-6ndjc" [8f1f8224-b8b4-4fb6-8f6b-2f4a0fb18e17] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:51:30.852547   46768 system_pods.go:61] "etcd-no-preload-321164" [c4b2427c-d882-4d29-af41-553961e5ee48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:51:30.852559   46768 system_pods.go:61] "kube-apiserver-no-preload-321164" [339ca32b-a5a1-474c-a5db-c35e7f87506d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:51:30.852569   46768 system_pods.go:61] "kube-controller-manager-no-preload-321164" [36241c8a-13ce-4e68-887b-ed929258d688] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:51:30.852581   46768 system_pods.go:61] "kube-proxy-f7dm4" [69308cf3-c18e-4edb-b0ea-c7f34a51aed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:51:30.852595   46768 system_pods.go:61] "kube-scheduler-no-preload-321164" [e9b14f0e-7789-4d1d-9a15-02c88d4a1e3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:51:30.852606   46768 system_pods.go:61] "metrics-server-57f55c9bc5-s95n2" [938af7b2-936b-495c-84c9-d580ae646926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:51:30.852622   46768 system_pods.go:61] "storage-provisioner" [70c690a6-a383-4b3f-9817-954056580009] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:51:30.852633   46768 system_pods.go:74] duration metric: took 24.608458ms to wait for pod list to return data ...
	I0907 00:51:30.852646   46768 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:51:30.860785   46768 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:51:30.860811   46768 node_conditions.go:123] node cpu capacity is 2
	I0907 00:51:30.860821   46768 node_conditions.go:105] duration metric: took 8.167675ms to run NodePressure ...
	I0907 00:51:30.860837   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:31.343033   46768 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:51:31.349908   46768 kubeadm.go:787] kubelet initialised
	I0907 00:51:31.349936   46768 kubeadm.go:788] duration metric: took 6.87538ms waiting for restarted kubelet to initialise ...
	I0907 00:51:31.349944   46768 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:31.366931   46768 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:33.392559   46768 pod_ready.go:102] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:34.449546   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetIP
	I0907 00:51:34.452803   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:34.453196   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:51:34.453226   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:51:34.453551   47297 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:34.459166   47297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:34.475045   47297 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0907 00:51:34.475159   47297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:34.525380   47297 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0907 00:51:34.525495   47297 ssh_runner.go:195] Run: which lz4
	I0907 00:51:34.530921   47297 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0907 00:51:34.537992   47297 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:34.538062   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0907 00:51:34.298412   46354 main.go:141] libmachine: (old-k8s-version-940806) Waiting to get IP...
	I0907 00:51:34.299510   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.300108   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.300166   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.300103   48085 retry.go:31] will retry after 237.599934ms: waiting for machine to come up
	I0907 00:51:34.539798   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.540306   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.540406   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.540348   48085 retry.go:31] will retry after 321.765824ms: waiting for machine to come up
	I0907 00:51:34.864120   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:34.864735   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:34.864761   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:34.864698   48085 retry.go:31] will retry after 485.375139ms: waiting for machine to come up
	I0907 00:51:35.351583   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:35.352142   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:35.352174   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:35.352081   48085 retry.go:31] will retry after 490.428576ms: waiting for machine to come up
	I0907 00:51:35.844432   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:35.844896   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:35.844921   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:35.844821   48085 retry.go:31] will retry after 610.440599ms: waiting for machine to come up
	I0907 00:51:36.456988   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:36.457697   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:36.457720   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:36.457634   48085 retry.go:31] will retry after 704.547341ms: waiting for machine to come up
	I0907 00:51:37.163551   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:37.163973   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:37.164001   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:37.163926   48085 retry.go:31] will retry after 825.931424ms: waiting for machine to come up
	I0907 00:51:37.991936   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:37.992550   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:37.992583   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:37.992489   48085 retry.go:31] will retry after 952.175868ms: waiting for machine to come up
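While the cloned VM boots, libmachine polls libvirt for a DHCP lease and retries with steadily growing delays (237ms, 321ms, 485ms, ... above). A rough shell equivalent of that wait loop is sketched below; the virsh invocation and the 1.5x backoff factor are assumptions for illustration (minikube's kvm2 driver talks to libvirt through Go bindings instead).

    DOMAIN=old-k8s-version-940806               # domain name from the log above
    DELAY=0.2
    for attempt in $(seq 1 20); do
      IP=$(sudo virsh domifaddr "$DOMAIN" 2>/dev/null | awk '/ipv4/ {print $4}' | cut -d/ -f1)
      [ -n "$IP" ] && { echo "machine is up at $IP"; break; }
      echo "no DHCP lease yet, retrying in ${DELAY}s"
      sleep "$DELAY"
      DELAY=$(awk -v d="$DELAY" 'BEGIN { printf "%.3f", d * 1.5 }')   # grow the delay each round
    done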
	I0907 00:51:37.065943   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:37.065973   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:37.065987   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.176178   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:37.176213   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:37.457739   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.464386   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:37.464423   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:37.958094   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:37.966530   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:37.966561   46833 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:38.458170   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:51:38.465933   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 200:
	ok
	I0907 00:51:38.477109   46833 api_server.go:141] control plane version: v1.28.1
	I0907 00:51:38.477135   46833 api_server.go:131] duration metric: took 5.521389594s to wait for apiserver health ...
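The probe sequence above is the usual restart pattern: anonymous requests to /healthz are rejected with 403 until the RBAC bootstrap roles exist, then the endpoint returns 500 while the remaining post-start hooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes above) finish, and finally 200. A minimal external equivalent of that wait, assuming only curl:

    URL=https://192.168.50.242:8443/healthz     # endpoint from the log above
    until [ "$(curl -ks -o /dev/null -w '%{http_code}' "$URL")" = "200" ]; do
      echo "apiserver not healthy yet"          # 403 and 500 both count as not-ready here
      sleep 0.5
    done
    echo "apiserver reports ok"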
	I0907 00:51:38.477143   46833 cni.go:84] Creating CNI manager for ""
	I0907 00:51:38.477149   46833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:38.478964   46833 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:38.480383   46833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:51:38.509844   46833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
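The 457-byte file written above is a bridge CNI conflist. Its exact contents are not shown in the log; the sketch below is a generic bridge + host-local configuration of the same shape, with illustrative field values rather than a copy of what minikube generates.

    sudo mkdir -p /etc/cni/net.d
    echo '{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null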
	I0907 00:51:38.549403   46833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:51:38.571430   46833 system_pods.go:59] 8 kube-system pods found
	I0907 00:51:38.571472   46833 system_pods.go:61] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:51:38.571491   46833 system_pods.go:61] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:51:38.571503   46833 system_pods.go:61] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:51:38.571563   46833 system_pods.go:61] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:51:38.571575   46833 system_pods.go:61] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:51:38.571592   46833 system_pods.go:61] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:51:38.571602   46833 system_pods.go:61] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:51:38.571613   46833 system_pods.go:61] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:51:38.571626   46833 system_pods.go:74] duration metric: took 22.19998ms to wait for pod list to return data ...
	I0907 00:51:38.571637   46833 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:51:38.581324   46833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:51:38.581361   46833 node_conditions.go:123] node cpu capacity is 2
	I0907 00:51:38.581373   46833 node_conditions.go:105] duration metric: took 9.730463ms to run NodePressure ...
	I0907 00:51:38.581393   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:39.140602   46833 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:51:39.147994   46833 kubeadm.go:787] kubelet initialised
	I0907 00:51:39.148025   46833 kubeadm.go:788] duration metric: took 7.397807ms waiting for restarted kubelet to initialise ...
	I0907 00:51:39.148034   46833 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:39.157241   46833 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.172898   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.172935   46833 pod_ready.go:81] duration metric: took 15.665673ms waiting for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.172947   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.172958   46833 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.180630   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "etcd-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.180666   46833 pod_ready.go:81] duration metric: took 7.698054ms waiting for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.180679   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "etcd-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.180692   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.202626   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.202658   46833 pod_ready.go:81] duration metric: took 21.956163ms waiting for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.202671   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.202699   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.210817   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.210849   46833 pod_ready.go:81] duration metric: took 8.138129ms waiting for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.210860   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.210882   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:39.801924   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-proxy-47255" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.801951   46833 pod_ready.go:81] duration metric: took 591.060955ms waiting for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:39.801963   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-proxy-47255" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:39.801970   46833 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
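Each of the pod_ready waits above polls the pod's Ready condition and skips it while the node itself is still NotReady. From outside the test harness the same check could be expressed with kubectl, for example for the scheduler pod of this profile (a sketch, assuming the standard component= static-pod labels):

    kubectl --context embed-certs-546209 -n kube-system wait pod \
      -l component=kube-scheduler --for=condition=Ready --timeout=4m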
	I0907 00:51:35.403877   46768 pod_ready.go:102] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:36.394774   46768 pod_ready.go:92] pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:36.394823   46768 pod_ready.go:81] duration metric: took 5.027852065s waiting for pod "coredns-5dd5756b68-6ndjc" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:36.394839   46768 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:38.429614   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:36.550649   47297 crio.go:444] Took 2.019779 seconds to copy over tarball
	I0907 00:51:36.550726   47297 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:51:40.133828   47297 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.583074443s)
	I0907 00:51:40.133861   47297 crio.go:451] Took 3.583177 seconds to extract the tarball
	I0907 00:51:40.133872   47297 ssh_runner.go:146] rm: /preloaded.tar.lz4
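The preload path above: the guest is checked for /preloaded.tar.lz4, the ~457 MB tarball is copied in over SSH when missing, unpacked into /var so cri-o finds the images, and then removed. The guest-side half, condensed into shell (a sketch, not minikube's code):

    stat -c "%s %y" /preloaded.tar.lz4               # existence check; non-zero exit = not there yet
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # image store and overlay layers land under /var
    sudo rm /preloaded.tar.lz4                       # reclaim ~450 MB once extracted
    sudo crictl images --output json | head -c 200   # cri-o should now report the preloaded images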
	I0907 00:51:40.177675   47297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:40.230574   47297 crio.go:496] all images are preloaded for cri-o runtime.
	I0907 00:51:40.230594   47297 cache_images.go:84] Images are preloaded, skipping loading
	I0907 00:51:40.230654   47297 ssh_runner.go:195] Run: crio config
	I0907 00:51:40.296445   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:51:40.296473   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:51:40.296497   47297 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:51:40.296519   47297 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.96 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-773466 NodeName:default-k8s-diff-port-773466 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:51:40.296709   47297 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.96
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-773466"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:51:40.296793   47297 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-773466 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
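The kubeadm.yaml generated above is staged on the node and handed to kubeadm (the embed-certs run earlier in this log invokes `kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml`). A dry run is one way to sanity-check such a file without changing the node; a sketch using the same PATH arrangement as the log:

    sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run   # prints what would be done, applies nothing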
	I0907 00:51:40.296850   47297 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0907 00:51:40.307543   47297 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:51:40.307642   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:51:40.318841   47297 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0907 00:51:40.337125   47297 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:51:40.354910   47297 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0907 00:51:40.375283   47297 ssh_runner.go:195] Run: grep 192.168.39.96	control-plane.minikube.internal$ /etc/hosts
	I0907 00:51:40.380206   47297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:40.394943   47297 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466 for IP: 192.168.39.96
	I0907 00:51:40.394980   47297 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.395194   47297 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:51:40.395231   47297 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:51:40.395295   47297 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.key
	I0907 00:51:40.410649   47297 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.key.e8bbde58
	I0907 00:51:40.410724   47297 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.key
	I0907 00:51:40.410868   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:51:40.410904   47297 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:51:40.410916   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:51:40.410942   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:51:40.410963   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:51:40.410985   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:51:40.411038   47297 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:40.411575   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:51:40.441079   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 00:51:40.465854   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:51:40.495221   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:51:40.521493   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:51:40.548227   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:51:40.574366   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:51:40.599116   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:51:40.624901   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:51:40.650606   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:51:40.690154   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.690183   46833 pod_ready.go:81] duration metric: took 888.205223ms waiting for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:40.690194   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.690204   46833 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:40.697723   46833 pod_ready.go:97] node "embed-certs-546209" hosting pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.697750   46833 pod_ready.go:81] duration metric: took 7.538932ms waiting for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	E0907 00:51:40.697761   46833 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-546209" hosting pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:40.697773   46833 pod_ready.go:38] duration metric: took 1.549726748s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:40.697793   46833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:51:40.709255   46833 ops.go:34] apiserver oom_adj: -16
	I0907 00:51:40.709281   46833 kubeadm.go:640] restartCluster took 21.865468537s
	I0907 00:51:40.709290   46833 kubeadm.go:406] StartCluster complete in 21.917781616s
	I0907 00:51:40.709309   46833 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.709403   46833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:51:40.712326   46833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:51:40.808025   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:51:40.808158   46833 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:51:40.808236   46833 config.go:182] Loaded profile config "embed-certs-546209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:51:40.808285   46833 addons.go:69] Setting metrics-server=true in profile "embed-certs-546209"
	I0907 00:51:40.808309   46833 addons.go:231] Setting addon metrics-server=true in "embed-certs-546209"
	W0907 00:51:40.808317   46833 addons.go:240] addon metrics-server should already be in state true
	I0907 00:51:40.808252   46833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-546209"
	I0907 00:51:40.808340   46833 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-546209"
	W0907 00:51:40.808354   46833 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:51:40.808375   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:40.808390   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:40.808257   46833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-546209"
	I0907 00:51:40.808493   46833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-546209"
	I0907 00:51:40.809864   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.809936   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.810411   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.810477   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.810518   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.810526   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.827159   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36263
	I0907 00:51:40.827608   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0907 00:51:40.827784   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.828059   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.828326   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.828354   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.828556   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.828579   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.828955   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.829067   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.829670   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.829715   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.829932   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.831070   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0907 00:51:40.831543   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.832142   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.832161   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.832527   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.834743   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:40.834801   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:40.853510   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0907 00:51:40.854194   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45027
	I0907 00:51:40.854261   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.854987   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.855019   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.855102   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:40.855381   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.855745   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.855791   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:40.855808   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:40.856430   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:40.856882   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:40.858468   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:41.154848   46833 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:51:40.859116   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:41.300012   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:51:41.362259   46833 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:51:41.362296   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:51:41.362332   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:41.460930   46833 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:51:41.460961   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:51:41.460988   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:41.464836   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465151   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465419   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:41.465455   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465590   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:41.465621   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:41.465764   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:41.465908   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:41.465979   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:41.466055   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:41.466150   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:41.466196   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:41.466276   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:41.466309   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:41.587470   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:51:41.594683   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:51:41.594709   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:51:41.621438   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:51:41.621471   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:51:41.664886   46833 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:51:41.664910   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:51:41.691795   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:51:41.886942   46833 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.078877765s)
	I0907 00:51:41.887038   46833 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0907 00:51:41.898851   46833 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-546209" context rescaled to 1 replicas
	I0907 00:51:41.898900   46833 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:51:42.014441   46833 out.go:177] * Verifying Kubernetes components...
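The coredns rescale logged a few lines above forces the deployment down to one replica for this single-node profile; a sketch of the equivalent kubectl call:

    kubectl --context embed-certs-546209 -n kube-system scale deployment coredns --replicas=1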
	I0907 00:51:38.946740   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:38.947268   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:38.947292   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:38.947211   48085 retry.go:31] will retry after 1.334104337s: waiting for machine to come up
	I0907 00:51:40.282730   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:40.283209   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:40.283233   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:40.283168   48085 retry.go:31] will retry after 1.521256667s: waiting for machine to come up
	I0907 00:51:41.806681   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:41.807182   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:41.807211   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:41.807126   48085 retry.go:31] will retry after 1.907600342s: waiting for machine to come up
	I0907 00:51:42.132070   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:51:42.150876   46833 addons.go:231] Setting addon default-storageclass=true in "embed-certs-546209"
	W0907 00:51:42.150905   46833 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:51:42.150935   46833 host.go:66] Checking if "embed-certs-546209" exists ...
	I0907 00:51:42.151329   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:42.151357   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:42.172605   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0907 00:51:42.173122   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:42.173662   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:42.173709   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:42.174155   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:42.174813   46833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:51:42.174877   46833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:51:42.196701   46833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0907 00:51:42.197287   46833 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:51:42.197859   46833 main.go:141] libmachine: Using API Version  1
	I0907 00:51:42.197882   46833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:51:42.198246   46833 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:51:42.198418   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetState
	I0907 00:51:42.200558   46833 main.go:141] libmachine: (embed-certs-546209) Calling .DriverName
	I0907 00:51:42.200942   46833 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:51:42.200954   46833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:51:42.200967   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHHostname
	I0907 00:51:42.204259   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:42.204952   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHPort
	I0907 00:51:42.204975   46833 main.go:141] libmachine: (embed-certs-546209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:b3:6a", ip: ""} in network mk-embed-certs-546209: {Iface:virbr4 ExpiryTime:2023-09-07 01:51:03 +0000 UTC Type:0 Mac:52:54:00:96:b3:6a Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:embed-certs-546209 Clientid:01:52:54:00:96:b3:6a}
	I0907 00:51:42.205009   46833 main.go:141] libmachine: (embed-certs-546209) DBG | domain embed-certs-546209 has defined IP address 192.168.50.242 and MAC address 52:54:00:96:b3:6a in network mk-embed-certs-546209
	I0907 00:51:42.205139   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHKeyPath
	I0907 00:51:42.205280   46833 main.go:141] libmachine: (embed-certs-546209) Calling .GetSSHUsername
	I0907 00:51:42.205405   46833 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/embed-certs-546209/id_rsa Username:docker}
	I0907 00:51:42.377838   46833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:51:43.286666   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.699154782s)
	I0907 00:51:43.286720   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.286734   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.287148   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.287174   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.287190   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.287210   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.287220   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.288970   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.289008   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.289021   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.436691   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.744844788s)
	I0907 00:51:43.436717   46833 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.304610389s)
	I0907 00:51:43.436744   46833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-546209" to be "Ready" ...
	I0907 00:51:43.436758   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.436775   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.436862   46833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.05899604s)
	I0907 00:51:43.436883   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.436893   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.438856   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.438887   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.438903   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.438907   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.438914   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.438919   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.438924   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.438932   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.438934   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.439020   46833 main.go:141] libmachine: (embed-certs-546209) DBG | Closing plugin on server side
	I0907 00:51:43.439206   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439219   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.439231   46833 addons.go:467] Verifying addon metrics-server=true in "embed-certs-546209"
	I0907 00:51:43.439266   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439277   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.439290   46833 main.go:141] libmachine: Making call to close driver server
	I0907 00:51:43.439299   46833 main.go:141] libmachine: (embed-certs-546209) Calling .Close
	I0907 00:51:43.439502   46833 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:51:43.439513   46833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:51:43.442917   46833 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0907 00:51:43.444226   46833 addons.go:502] enable addons completed in 2.636061813s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0907 00:51:40.924494   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:42.925582   46768 pod_ready.go:102] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:40.679951   47297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:51:40.859542   47297 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:51:40.881658   47297 ssh_runner.go:195] Run: openssl version
	I0907 00:51:40.888518   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:51:40.902200   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.908038   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.908106   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:51:40.914418   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:51:40.927511   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:51:40.941360   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.947556   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.947622   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:51:40.953780   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:51:40.966576   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:51:40.981447   47297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:51:40.989719   47297 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:51:40.989779   47297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:51:41.000685   47297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:51:41.017936   47297 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:51:41.023280   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:51:41.029915   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:51:41.038011   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:51:41.044570   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:51:41.052534   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:51:41.060580   47297 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
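The `openssl x509 ... -checkend 86400` probes above simply ask whether each control-plane certificate expires within the next 24 hours. A minimal Go sketch of the same check, assuming a PEM-encoded certificate file (the path below is only an example copied from the log):

```go
// Sketch only: in-process equivalent of `openssl x509 -checkend 86400`.
// Parses a PEM certificate and reports whether it expires inside the window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if NotAfter falls before now+window, i.e. the cert is about to expire.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Same 86400-second (24h) window the logged openssl calls use.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}
```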
	I0907 00:51:41.068664   47297 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-773466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-773466 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:51:41.068776   47297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:51:41.068897   47297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:41.111849   47297 cri.go:89] found id: ""
	I0907 00:51:41.111923   47297 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:51:41.126171   47297 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:51:41.126193   47297 kubeadm.go:636] restartCluster start
	I0907 00:51:41.126249   47297 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:51:41.138401   47297 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.139882   47297 kubeconfig.go:92] found "default-k8s-diff-port-773466" server: "https://192.168.39.96:8444"
	I0907 00:51:41.142907   47297 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:51:41.154285   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.154346   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.168992   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.169012   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.169057   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.183283   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:41.683942   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:41.684036   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:41.701647   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:42.183800   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:42.183882   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:42.213176   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:42.683460   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:42.683550   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:42.701805   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.184099   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:43.184206   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:43.202359   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:43.683466   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:43.683541   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:43.697133   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:44.183663   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:44.183750   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:44.201236   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:44.684320   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:44.684411   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:44.698198   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:45.183451   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:45.183533   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:45.197529   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
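The repeated "Checking apiserver status" entries above are a fixed-interval poll: run `pgrep` over SSH, and retry roughly every half second until the kube-apiserver process shows up. A minimal sketch of such a poll loop (interval and timeout below are illustrative values, not minikube's actual configuration):

```go
// Sketch only: retry pgrep until the apiserver process appears or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServer(interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // pid found
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("apiserver process did not appear within %s", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	pid, err := waitForAPIServer(500*time.Millisecond, 30*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
```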
	I0907 00:51:43.716005   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:43.716632   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:43.716668   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:43.716570   48085 retry.go:31] will retry after 3.526983217s: waiting for machine to come up
	I0907 00:51:47.245213   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:47.245615   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:47.245645   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:47.245561   48085 retry.go:31] will retry after 3.453934877s: waiting for machine to come up
	I0907 00:51:45.450760   46833 node_ready.go:58] node "embed-certs-546209" has status "Ready":"False"
	I0907 00:51:47.949024   46833 node_ready.go:49] node "embed-certs-546209" has status "Ready":"True"
	I0907 00:51:47.949053   46833 node_ready.go:38] duration metric: took 4.512298071s waiting for node "embed-certs-546209" to be "Ready" ...
	I0907 00:51:47.949063   46833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:51:47.956755   46833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.964323   46833 pod_ready.go:92] pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:47.964345   46833 pod_ready.go:81] duration metric: took 7.56298ms waiting for pod "coredns-5dd5756b68-vrgm9" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.964356   46833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.425347   46768 pod_ready.go:92] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.425370   46768 pod_ready.go:81] duration metric: took 9.030524984s waiting for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.425380   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.432508   46768 pod_ready.go:92] pod "kube-apiserver-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.432531   46768 pod_ready.go:81] duration metric: took 7.145112ms waiting for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.432545   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.441245   46768 pod_ready.go:92] pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.441265   46768 pod_ready.go:81] duration metric: took 8.713177ms waiting for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.441275   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f7dm4" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.446603   46768 pod_ready.go:92] pod "kube-proxy-f7dm4" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.446627   46768 pod_ready.go:81] duration metric: took 5.346628ms waiting for pod "kube-proxy-f7dm4" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.446641   46768 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.453061   46768 pod_ready.go:92] pod "kube-scheduler-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:45.453091   46768 pod_ready.go:81] duration metric: took 6.442457ms waiting for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:45.453104   46768 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:47.730093   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:45.684191   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:45.684287   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:45.702020   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:46.183587   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:46.183697   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:46.201390   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:46.683442   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:46.683519   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:46.699015   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:47.183908   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:47.183998   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:47.196617   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:47.683929   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:47.683991   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:47.696499   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:48.183929   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:48.184000   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:48.197425   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:48.683932   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:48.684019   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:48.696986   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:49.184149   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:49.184224   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:49.197363   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:49.684066   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:49.684152   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:49.697853   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:50.183372   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:50.183490   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:50.195818   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:50.700500   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:50.700920   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | unable to find current IP address of domain old-k8s-version-940806 in network mk-old-k8s-version-940806
	I0907 00:51:50.700939   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | I0907 00:51:50.700882   48085 retry.go:31] will retry after 4.6319983s: waiting for machine to come up
	I0907 00:51:49.984505   46833 pod_ready.go:102] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:51.987061   46833 pod_ready.go:102] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:53.485331   46833 pod_ready.go:92] pod "etcd-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.485356   46833 pod_ready.go:81] duration metric: took 5.520993929s waiting for pod "etcd-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.485368   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.491351   46833 pod_ready.go:92] pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.491371   46833 pod_ready.go:81] duration metric: took 5.996687ms waiting for pod "kube-apiserver-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.491387   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.496425   46833 pod_ready.go:92] pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.496448   46833 pod_ready.go:81] duration metric: took 5.054087ms waiting for pod "kube-controller-manager-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.496460   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.504963   46833 pod_ready.go:92] pod "kube-proxy-47255" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.504982   46833 pod_ready.go:81] duration metric: took 8.515814ms waiting for pod "kube-proxy-47255" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.504990   46833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.550180   46833 pod_ready.go:92] pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace has status "Ready":"True"
	I0907 00:51:53.550208   46833 pod_ready.go:81] duration metric: took 45.211992ms waiting for pod "kube-scheduler-embed-certs-546209" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:53.550222   46833 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	I0907 00:51:50.229069   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:52.233340   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:54.728824   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:50.683740   47297 api_server.go:166] Checking apiserver status ...
	I0907 00:51:50.683806   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:51:50.695528   47297 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:51:51.154940   47297 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:51:51.154990   47297 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:51:51.155002   47297 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:51:51.155052   47297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:51:51.190293   47297 cri.go:89] found id: ""
	I0907 00:51:51.190351   47297 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:51:51.207237   47297 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:51:51.216623   47297 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:51:51.216671   47297 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:51.226376   47297 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:51:51.226399   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:51.352763   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:51.879625   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.090367   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:51:52.169714   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
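The restart path above re-runs individual `kubeadm init phase` commands (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full `kubeadm init`. A minimal sketch of driving that same sequence from Go, with commands mirroring the log and error handling simplified for illustration:

```go
// Sketch only: run the kubeadm init phases in the order logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase,
		)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
			return
		}
		fmt.Printf("phase %q completed\n", phase)
	}
}
```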
	I0907 00:51:52.258757   47297 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:51:52.258861   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:52.274881   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:52.799083   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:53.298600   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:53.798807   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.299419   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.798660   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:51:54.824175   47297 api_server.go:72] duration metric: took 2.565415526s to wait for apiserver process to appear ...
	I0907 00:51:54.824203   47297 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:51:54.824222   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
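Once the process exists, the wait switches from pgrep to probing the apiserver's `/healthz` endpoint over HTTPS. A minimal sketch of such a probe, using the URL from the log; TLS verification is skipped here purely for illustration, whereas real code would load the cluster CA instead:

```go
// Sketch only: HTTPS probe of the apiserver healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: a real client should trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.96:8444/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
```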
	I0907 00:51:55.335922   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.336311   46354 main.go:141] libmachine: (old-k8s-version-940806) Found IP for machine: 192.168.83.245
	I0907 00:51:55.336325   46354 main.go:141] libmachine: (old-k8s-version-940806) Reserving static IP address...
	I0907 00:51:55.336336   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has current primary IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.336816   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "old-k8s-version-940806", mac: "52:54:00:1f:83:50", ip: "192.168.83.245"} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.336872   46354 main.go:141] libmachine: (old-k8s-version-940806) Reserved static IP address: 192.168.83.245
	I0907 00:51:55.336893   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | skip adding static IP to network mk-old-k8s-version-940806 - found existing host DHCP lease matching {name: "old-k8s-version-940806", mac: "52:54:00:1f:83:50", ip: "192.168.83.245"}
	I0907 00:51:55.336909   46354 main.go:141] libmachine: (old-k8s-version-940806) Waiting for SSH to be available...
	I0907 00:51:55.336919   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Getting to WaitForSSH function...
	I0907 00:51:55.339323   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.339730   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.339768   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.339880   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Using SSH client type: external
	I0907 00:51:55.339907   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa (-rw-------)
	I0907 00:51:55.339946   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:51:55.339964   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | About to run SSH command:
	I0907 00:51:55.340001   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | exit 0
	I0907 00:51:55.483023   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | SSH cmd err, output: <nil>: 
	I0907 00:51:55.483362   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetConfigRaw
	I0907 00:51:55.484121   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:55.487091   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.487590   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.487621   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.487863   46354 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/config.json ...
	I0907 00:51:55.488067   46354 machine.go:88] provisioning docker machine ...
	I0907 00:51:55.488088   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:55.488332   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.488525   46354 buildroot.go:166] provisioning hostname "old-k8s-version-940806"
	I0907 00:51:55.488551   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.488707   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.491136   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.491567   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.491600   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.491818   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.491950   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.492058   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.492133   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.492237   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:55.492685   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:55.492705   46354 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-940806 && echo "old-k8s-version-940806" | sudo tee /etc/hostname
	I0907 00:51:55.648589   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-940806
	
	I0907 00:51:55.648628   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.651624   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.652046   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.652094   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.652282   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.652472   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.652654   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.652813   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.652977   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:55.653628   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:55.653657   46354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-940806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-940806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-940806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:51:55.805542   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:51:55.805573   46354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6470/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6470/.minikube}
	I0907 00:51:55.805607   46354 buildroot.go:174] setting up certificates
	I0907 00:51:55.805617   46354 provision.go:83] configureAuth start
	I0907 00:51:55.805629   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetMachineName
	I0907 00:51:55.805907   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:55.808800   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.809142   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.809175   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.809299   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.811385   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.811785   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.811812   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.811980   46354 provision.go:138] copyHostCerts
	I0907 00:51:55.812089   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem, removing ...
	I0907 00:51:55.812104   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem
	I0907 00:51:55.812172   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/ca.pem (1082 bytes)
	I0907 00:51:55.812287   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem, removing ...
	I0907 00:51:55.812297   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem
	I0907 00:51:55.812321   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/cert.pem (1123 bytes)
	I0907 00:51:55.812418   46354 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem, removing ...
	I0907 00:51:55.812427   46354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem
	I0907 00:51:55.812463   46354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6470/.minikube/key.pem (1679 bytes)
	I0907 00:51:55.812538   46354 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-940806 san=[192.168.83.245 192.168.83.245 localhost 127.0.0.1 minikube old-k8s-version-940806]
	I0907 00:51:55.920274   46354 provision.go:172] copyRemoteCerts
	I0907 00:51:55.920327   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:51:55.920348   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:55.923183   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.923599   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:55.923632   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:55.923816   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:55.924011   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:55.924174   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:55.924335   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.020317   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:51:56.048299   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0907 00:51:56.075483   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:51:56.101118   46354 provision.go:86] duration metric: configureAuth took 295.488336ms
	I0907 00:51:56.101150   46354 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:51:56.101338   46354 config.go:182] Loaded profile config "old-k8s-version-940806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:51:56.101407   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.104235   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.104600   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.104640   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.104878   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.105093   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.105306   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.105495   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.105668   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:56.106199   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:56.106217   46354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:51:56.435571   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:51:56.435644   46354 machine.go:91] provisioned docker machine in 947.562946ms
	I0907 00:51:56.435662   46354 start.go:300] post-start starting for "old-k8s-version-940806" (driver="kvm2")
	I0907 00:51:56.435679   46354 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:51:56.435712   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.436041   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:51:56.436083   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.439187   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.439537   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.439563   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.439888   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.440116   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.440285   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.440427   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.542162   46354 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:51:56.546357   46354 info.go:137] Remote host: Buildroot 2021.02.12
	I0907 00:51:56.546375   46354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/addons for local assets ...
	I0907 00:51:56.546435   46354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6470/.minikube/files for local assets ...
	I0907 00:51:56.546511   46354 filesync.go:149] local asset: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem -> 136572.pem in /etc/ssl/certs
	I0907 00:51:56.546648   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:51:56.556125   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:51:56.577844   46354 start.go:303] post-start completed in 142.166343ms
	I0907 00:51:56.577874   46354 fix.go:56] fixHost completed within 23.860860531s
	I0907 00:51:56.577898   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.580726   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.581062   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.581090   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.581221   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.581540   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.581742   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.581909   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.582113   46354 main.go:141] libmachine: Using SSH client type: native
	I0907 00:51:56.582532   46354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.83.245 22 <nil> <nil>}
	I0907 00:51:56.582553   46354 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0907 00:51:56.715584   46354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694047916.695896692
	
	I0907 00:51:56.715607   46354 fix.go:206] guest clock: 1694047916.695896692
	I0907 00:51:56.715615   46354 fix.go:219] Guest: 2023-09-07 00:51:56.695896692 +0000 UTC Remote: 2023-09-07 00:51:56.57787864 +0000 UTC m=+363.381197654 (delta=118.018052ms)
	I0907 00:51:56.715632   46354 fix.go:190] guest clock delta is within tolerance: 118.018052ms
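The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) with the host clock and accept the machine if the delta stays within a tolerance. A minimal sketch of that comparison, with both timestamps hard-coded from the log and a 1-second tolerance used only as an example value:

```go
// Sketch only: compute the guest/host clock delta and check it against a tolerance.
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	guest := time.Unix(1694047916, 695896692) // parsed from "1694047916.695896692"
	remote := time.Date(2023, 9, 7, 0, 51, 56, 577878640, time.UTC)

	delta := guest.Sub(remote) // ~118ms in the logged run
	tolerance := time.Second   // example tolerance, not minikube's actual value

	fmt.Printf("guest clock delta: %v\n", delta)
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Println("delta is within tolerance")
	} else {
		fmt.Println("delta exceeds tolerance; clock would need adjusting")
	}
}
```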
	I0907 00:51:56.715639   46354 start.go:83] releasing machines lock for "old-k8s-version-940806", held for 23.998669865s
	I0907 00:51:56.715658   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.715909   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:56.718637   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.718992   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.719030   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.719203   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719646   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719852   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:51:56.719935   46354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:51:56.719980   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.720050   46354 ssh_runner.go:195] Run: cat /version.json
	I0907 00:51:56.720068   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:51:56.722463   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.722752   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.722809   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.722850   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.723041   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.723208   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.723241   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:56.723282   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:56.723394   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:51:56.723406   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.723599   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.723632   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:51:56.723797   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:51:56.723956   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:51:56.835700   46354 ssh_runner.go:195] Run: systemctl --version
	I0907 00:51:56.841554   46354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:51:56.988658   46354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:51:56.995421   46354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:51:56.995495   46354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:51:57.011588   46354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:51:57.011608   46354 start.go:466] detecting cgroup driver to use...
	I0907 00:51:57.011669   46354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:51:57.029889   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:51:57.043942   46354 docker.go:196] disabling cri-docker service (if available) ...
	I0907 00:51:57.044002   46354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:51:57.056653   46354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:51:57.069205   46354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:51:57.184510   46354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:51:57.323399   46354 docker.go:212] disabling docker service ...
	I0907 00:51:57.323477   46354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:51:57.336506   46354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:51:57.348657   46354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:51:57.464450   46354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:51:57.577763   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:51:57.590934   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:51:57.609445   46354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0907 00:51:57.609500   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.619112   46354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:51:57.619173   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.629272   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.638702   46354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:51:57.648720   46354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:51:57.659046   46354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:51:57.667895   46354 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:51:57.667971   46354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:51:57.681673   46354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:51:57.690907   46354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:51:57.801113   46354 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:51:57.978349   46354 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:51:57.978432   46354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:51:57.983665   46354 start.go:534] Will wait 60s for crictl version
	I0907 00:51:57.983714   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:51:57.988244   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:51:58.019548   46354 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0907 00:51:58.019616   46354 ssh_runner.go:195] Run: crio --version
	I0907 00:51:58.068229   46354 ssh_runner.go:195] Run: crio --version
	I0907 00:51:58.118554   46354 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0907 00:51:58.120322   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetIP
	I0907 00:51:58.122944   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:58.123321   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:51:58.123377   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:51:58.123569   46354 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0907 00:51:58.128115   46354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:51:58.140862   46354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0907 00:51:58.140933   46354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:51:58.182745   46354 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0907 00:51:58.182829   46354 ssh_runner.go:195] Run: which lz4
	I0907 00:51:58.188491   46354 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0907 00:51:58.193202   46354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:51:58.193237   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0907 00:51:55.862451   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.363582   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.511655   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:58.511686   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:58.511699   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:58.549405   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:51:58.549442   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:51:59.050120   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:59.057915   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:59.057946   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:51:59.550150   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:51:59.559928   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0907 00:51:59.559970   47297 api_server.go:103] status: https://192.168.39.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0907 00:52:00.050535   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:52:00.060556   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 200:
	ok
	I0907 00:52:00.069872   47297 api_server.go:141] control plane version: v1.28.1
	I0907 00:52:00.069898   47297 api_server.go:131] duration metric: took 5.245689478s to wait for apiserver health ...
	I0907 00:52:00.069906   47297 cni.go:84] Creating CNI manager for ""
	I0907 00:52:00.069911   47297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:00.071700   47297 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:51:56.730172   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:51:58.731973   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:00.073858   47297 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:52:00.098341   47297 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:52:00.120355   47297 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:52:00.137820   47297 system_pods.go:59] 8 kube-system pods found
	I0907 00:52:00.137936   47297 system_pods.go:61] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:52:00.137967   47297 system_pods.go:61] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:52:00.137989   47297 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:52:00.138007   47297 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:52:00.138018   47297 system_pods.go:61] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:52:00.138032   47297 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:52:00.138045   47297 system_pods.go:61] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:52:00.138058   47297 system_pods.go:61] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:52:00.138069   47297 system_pods.go:74] duration metric: took 17.695163ms to wait for pod list to return data ...
	I0907 00:52:00.138082   47297 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:52:00.145755   47297 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:52:00.145790   47297 node_conditions.go:123] node cpu capacity is 2
	I0907 00:52:00.145803   47297 node_conditions.go:105] duration metric: took 7.711411ms to run NodePressure ...
	I0907 00:52:00.145825   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:00.468823   47297 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:52:00.476107   47297 kubeadm.go:787] kubelet initialised
	I0907 00:52:00.476130   47297 kubeadm.go:788] duration metric: took 7.282541ms waiting for restarted kubelet to initialise ...
	I0907 00:52:00.476138   47297 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:00.483366   47297 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.495045   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.495072   47297 pod_ready.go:81] duration metric: took 11.633116ms waiting for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.495083   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.495092   47297 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.500465   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.500488   47297 pod_ready.go:81] duration metric: took 5.386997ms waiting for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.500498   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.500504   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.507318   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.507392   47297 pod_ready.go:81] duration metric: took 6.878563ms waiting for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.507416   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.507436   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.527784   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.527820   47297 pod_ready.go:81] duration metric: took 20.36412ms waiting for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.527833   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.527844   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:00.936895   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-proxy-5bh7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.936926   47297 pod_ready.go:81] duration metric: took 409.073374ms waiting for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:00.936938   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-proxy-5bh7n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:00.936947   47297 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:01.325746   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.325777   47297 pod_ready.go:81] duration metric: took 388.819699ms waiting for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:01.325787   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.325798   47297 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:01.725791   47297 pod_ready.go:97] node "default-k8s-diff-port-773466" hosting pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.725828   47297 pod_ready.go:81] duration metric: took 400.019773ms waiting for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	E0907 00:52:01.725840   47297 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-773466" hosting pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:01.725852   47297 pod_ready.go:38] duration metric: took 1.249702286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:01.725871   47297 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:52:01.742792   47297 ops.go:34] apiserver oom_adj: -16
	I0907 00:52:01.742816   47297 kubeadm.go:640] restartCluster took 20.616616394s
	I0907 00:52:01.742825   47297 kubeadm.go:406] StartCluster complete in 20.674170679s
	I0907 00:52:01.742843   47297 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:01.742936   47297 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:52:01.744735   47297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:01.744998   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:52:01.745113   47297 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:52:01.745212   47297 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745218   47297 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745232   47297 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.745240   47297 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:52:01.745232   47297 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-773466"
	I0907 00:52:01.745268   47297 config.go:182] Loaded profile config "default-k8s-diff-port-773466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:52:01.745301   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.745248   47297 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-773466"
	I0907 00:52:01.745432   47297 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.745442   47297 addons.go:240] addon metrics-server should already be in state true
	I0907 00:52:01.745489   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.745709   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745718   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745753   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.745813   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.745895   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.745930   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.755156   47297 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-773466" context rescaled to 1 replicas
	I0907 00:52:01.755193   47297 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.96 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:52:01.757452   47297 out.go:177] * Verifying Kubernetes components...
	I0907 00:52:01.759076   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:52:01.763067   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0907 00:52:01.763578   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.764125   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.764147   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.764483   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.764668   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.764804   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0907 00:52:01.765385   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.765972   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.765988   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.766336   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.768468   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I0907 00:52:01.768952   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.768985   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.769339   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.769827   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.769860   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.770129   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.770612   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.770641   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.782323   47297 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-773466"
	W0907 00:52:01.782353   47297 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:52:01.782387   47297 host.go:66] Checking if "default-k8s-diff-port-773466" exists ...
	I0907 00:52:01.782822   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.782858   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.788535   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0907 00:52:01.789169   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.789826   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.789845   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.790158   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0907 00:52:01.790340   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.790544   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.790616   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.791036   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.791055   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.791552   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.791726   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.793270   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.796517   47297 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:52:01.794011   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.798239   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:52:01.798266   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:52:01.798291   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.800176   47297 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:51:59.928894   46354 crio.go:444] Took 1.740438 seconds to copy over tarball
	I0907 00:51:59.928974   46354 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:52:03.105945   46354 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.176929999s)
	I0907 00:52:03.105977   46354 crio.go:451] Took 3.177055 seconds to extract the tarball
	I0907 00:52:03.105987   46354 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0907 00:52:03.150092   46354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:52:03.193423   46354 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0907 00:52:03.193450   46354 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0907 00:52:03.193525   46354 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.193544   46354 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:03.193564   46354 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.193730   46354 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.193799   46354 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.193802   46354 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0907 00:52:03.193829   46354 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.193736   46354 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.194948   46354 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.195017   46354 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.194949   46354 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:03.195642   46354 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.195763   46354 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.195814   46354 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.195843   46354 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0907 00:52:03.195874   46354 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:01.801952   47297 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:52:01.801969   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:52:01.801989   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.800897   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I0907 00:52:01.801662   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.802261   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.802286   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.802332   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.802683   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.802922   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.802961   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.803124   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.804246   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.804272   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.804654   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.804870   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.805283   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.805314   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.805418   47297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:52:01.805448   47297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:52:01.805541   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.805723   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.805889   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.806052   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.822423   47297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32999
	I0907 00:52:01.822847   47297 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:52:01.823441   47297 main.go:141] libmachine: Using API Version  1
	I0907 00:52:01.823459   47297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:52:01.823843   47297 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:52:01.824036   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetState
	I0907 00:52:01.825740   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .DriverName
	I0907 00:52:01.826032   47297 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:52:01.826051   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:52:01.826076   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHHostname
	I0907 00:52:01.829041   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.829284   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:2c:44", ip: ""} in network mk-default-k8s-diff-port-773466: {Iface:virbr3 ExpiryTime:2023-09-07 01:51:24 +0000 UTC Type:0 Mac:52:54:00:61:2c:44 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:default-k8s-diff-port-773466 Clientid:01:52:54:00:61:2c:44}
	I0907 00:52:01.829310   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | domain default-k8s-diff-port-773466 has defined IP address 192.168.39.96 and MAC address 52:54:00:61:2c:44 in network mk-default-k8s-diff-port-773466
	I0907 00:52:01.829407   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHPort
	I0907 00:52:01.829591   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHKeyPath
	I0907 00:52:01.829712   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .GetSSHUsername
	I0907 00:52:01.830194   47297 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/default-k8s-diff-port-773466/id_rsa Username:docker}
	I0907 00:52:01.956646   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:52:01.956669   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:52:01.974183   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:52:01.978309   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:52:02.048672   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:52:02.048708   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:52:02.088069   47297 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:52:02.088099   47297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:52:02.142271   47297 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-773466" to be "Ready" ...
	I0907 00:52:02.142668   47297 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0907 00:52:02.197788   47297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:52:03.587076   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.612851341s)
	I0907 00:52:03.587130   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587146   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587147   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.608805294s)
	I0907 00:52:03.587182   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587210   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587452   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.587493   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587514   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587525   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587535   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.587495   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.587751   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587765   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587892   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.587905   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.587925   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.587935   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.588252   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.588277   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.588285   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.588297   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.588305   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.588543   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.588555   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.648373   47297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.450538249s)
	I0907 00:52:03.648433   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.648449   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.648789   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) DBG | Closing plugin on server side
	I0907 00:52:03.648824   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.648833   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.648848   47297 main.go:141] libmachine: Making call to close driver server
	I0907 00:52:03.648858   47297 main.go:141] libmachine: (default-k8s-diff-port-773466) Calling .Close
	I0907 00:52:03.649118   47297 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:52:03.649137   47297 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:52:03.649153   47297 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-773466"
	I0907 00:52:03.834785   47297 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:52:00.858996   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:02.861983   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:01.228807   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:03.229017   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:04.154749   47297 node_ready.go:58] node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:04.260530   47297 addons.go:502] enable addons completed in 2.51536834s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0907 00:52:03.398538   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.480702   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.482201   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.482206   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0907 00:52:03.482815   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.484155   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.484815   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.698892   46354 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0907 00:52:03.698936   46354 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.698938   46354 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0907 00:52:03.698965   46354 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0907 00:52:03.699028   46354 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.698975   46354 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0907 00:52:03.698982   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.699069   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.699084   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.703734   46354 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0907 00:52:03.703764   46354 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.703796   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729259   46354 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0907 00:52:03.729295   46354 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.729331   46354 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0907 00:52:03.729366   46354 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.729373   46354 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0907 00:52:03.729394   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0907 00:52:03.729398   46354 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.729404   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729336   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729441   46354 ssh_runner.go:195] Run: which crictl
	I0907 00:52:03.729491   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0907 00:52:03.729519   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0907 00:52:03.729601   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0907 00:52:03.791169   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0907 00:52:03.814632   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0907 00:52:03.814660   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0907 00:52:03.814689   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0907 00:52:03.814747   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0907 00:52:03.814799   46354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0907 00:52:03.814839   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0907 00:52:03.814841   46354 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0907 00:52:03.876039   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0907 00:52:03.876095   46354 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0907 00:52:03.876082   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0907 00:52:03.876114   46354 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0907 00:52:03.876153   46354 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0907 00:52:03.876158   46354 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0907 00:52:04.549426   46354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:52:05.733437   46354 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.85724297s)
	I0907 00:52:05.733479   46354 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0907 00:52:05.733519   46354 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.184052604s)
	I0907 00:52:05.733568   46354 cache_images.go:92] LoadImages completed in 2.540103614s
	W0907 00:52:05.733639   46354 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17174-6470/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0907 00:52:05.733723   46354 ssh_runner.go:195] Run: crio config
	I0907 00:52:05.795752   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:52:05.795780   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:05.795801   46354 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0907 00:52:05.795824   46354 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.245 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-940806 NodeName:old-k8s-version-940806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0907 00:52:05.795975   46354 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-940806"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-940806
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.245:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:52:05.796074   46354 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-940806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-940806 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0907 00:52:05.796135   46354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0907 00:52:05.807772   46354 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:52:05.807864   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:52:05.818185   46354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0907 00:52:05.835526   46354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:52:05.853219   46354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0907 00:52:05.873248   46354 ssh_runner.go:195] Run: grep 192.168.83.245	control-plane.minikube.internal$ /etc/hosts
	I0907 00:52:05.877640   46354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:52:05.890975   46354 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806 for IP: 192.168.83.245
	I0907 00:52:05.891009   46354 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb16c7f264857958ba7fcfc08c2912bcc23a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:52:05.891171   46354 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key
	I0907 00:52:05.891226   46354 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key
	I0907 00:52:05.891327   46354 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.key
	I0907 00:52:05.891407   46354 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.key.8de8e89b
	I0907 00:52:05.891459   46354 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.key
	I0907 00:52:05.891667   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem (1338 bytes)
	W0907 00:52:05.891713   46354 certs.go:433] ignoring /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657_empty.pem, impossibly tiny 0 bytes
	I0907 00:52:05.891729   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:52:05.891766   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:52:05.891801   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:52:05.891836   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/certs/home/jenkins/minikube-integration/17174-6470/.minikube/certs/key.pem (1679 bytes)
	I0907 00:52:05.891913   46354 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem (1708 bytes)
	I0907 00:52:05.892547   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0907 00:52:05.917196   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:52:05.942387   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:52:05.965551   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:52:05.987658   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:52:06.012449   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:52:06.037055   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:52:06.061051   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:52:06.085002   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:52:06.109132   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/certs/13657.pem --> /usr/share/ca-certificates/13657.pem (1338 bytes)
	I0907 00:52:06.132091   46354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/ssl/certs/136572.pem --> /usr/share/ca-certificates/136572.pem (1708 bytes)
	I0907 00:52:06.155215   46354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:52:06.173122   46354 ssh_runner.go:195] Run: openssl version
	I0907 00:52:06.178736   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136572.pem && ln -fs /usr/share/ca-certificates/136572.pem /etc/ssl/certs/136572.pem"
	I0907 00:52:06.189991   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.194548   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:48 /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.194596   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136572.pem
	I0907 00:52:06.200538   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136572.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:52:06.212151   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:52:06.224356   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.229976   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.230037   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:52:06.236389   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:52:06.248369   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13657.pem && ln -fs /usr/share/ca-certificates/13657.pem /etc/ssl/certs/13657.pem"
	I0907 00:52:06.259325   46354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.264451   46354 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:48 /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.264514   46354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13657.pem
	I0907 00:52:06.270564   46354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13657.pem /etc/ssl/certs/51391683.0"
	I0907 00:52:06.282506   46354 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0907 00:52:06.287280   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:52:06.293280   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:52:06.299272   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:52:06.305342   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:52:06.311194   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:52:06.317634   46354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
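The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above verify that each control-plane certificate stays valid for at least the next 24 hours (86400 seconds). Below is a minimal Go sketch of an equivalent check; the certificate path is copied from the first check above and is only illustrative.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Illustrative path, taken from the first -checkend run above.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Same condition as `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate valid for at least another 24h")
    }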
	I0907 00:52:06.323437   46354 kubeadm.go:404] StartCluster: {Name:old-k8s-version-940806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-940806 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0907 00:52:06.323591   46354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:52:06.323668   46354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:52:06.358285   46354 cri.go:89] found id: ""
	I0907 00:52:06.358357   46354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:52:06.368975   46354 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0907 00:52:06.368997   46354 kubeadm.go:636] restartCluster start
	I0907 00:52:06.369060   46354 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:52:06.379841   46354 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.380906   46354 kubeconfig.go:92] found "old-k8s-version-940806" server: "https://192.168.83.245:8443"
	I0907 00:52:06.383428   46354 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:52:06.393862   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.393912   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.406922   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.406947   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.406995   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.419930   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:06.920685   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:06.920763   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:06.934327   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:07.420551   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:07.420652   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:07.438377   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:07.920500   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:07.920598   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:07.936835   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:05.363807   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:07.869141   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:05.229666   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:07.729895   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:09.731464   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:06.656552   47297 node_ready.go:58] node "default-k8s-diff-port-773466" has status "Ready":"False"
	I0907 00:52:09.155326   47297 node_ready.go:49] node "default-k8s-diff-port-773466" has status "Ready":"True"
	I0907 00:52:09.155347   47297 node_ready.go:38] duration metric: took 7.013040488s waiting for node "default-k8s-diff-port-773466" to be "Ready" ...
	I0907 00:52:09.155355   47297 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:52:09.164225   47297 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.170406   47297 pod_ready.go:92] pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.170437   47297 pod_ready.go:81] duration metric: took 6.189088ms waiting for pod "coredns-5dd5756b68-wdnpc" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.170450   47297 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.178363   47297 pod_ready.go:92] pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.178390   47297 pod_ready.go:81] duration metric: took 7.932283ms waiting for pod "etcd-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.178403   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.184875   47297 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.184891   47297 pod_ready.go:81] duration metric: took 6.482032ms waiting for pod "kube-apiserver-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.184900   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.192246   47297 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.192265   47297 pod_ready.go:81] duration metric: took 7.359919ms waiting for pod "kube-controller-manager-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.192274   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.556032   47297 pod_ready.go:92] pod "kube-proxy-5bh7n" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:09.556064   47297 pod_ready.go:81] duration metric: took 363.783194ms waiting for pod "kube-proxy-5bh7n" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:09.556077   47297 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:08.420749   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:08.420813   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:08.434111   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:08.920795   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:08.920891   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:08.934515   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:09.420076   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:09.420167   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:09.433668   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:09.920090   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:09.920185   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:09.934602   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.420086   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:10.420186   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:10.434617   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.920124   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:10.920196   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:10.933372   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:11.420990   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:11.421072   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:11.435087   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:11.920579   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:11.920653   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:11.933614   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:12.420100   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:12.420192   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:12.434919   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:12.920816   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:12.920911   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:12.934364   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:10.357508   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.357966   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:14.358965   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.227826   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:14.228106   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:11.862581   47297 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:12.363573   47297 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace has status "Ready":"True"
	I0907 00:52:12.363593   47297 pod_ready.go:81] duration metric: took 2.807509276s waiting for pod "kube-scheduler-default-k8s-diff-port-773466" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:12.363602   47297 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	I0907 00:52:14.763624   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:13.420355   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:13.420427   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:13.434047   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:13.920675   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:13.920757   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:13.933725   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:14.420169   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:14.420244   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:14.433012   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:14.920490   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:14.920603   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:14.934208   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:15.420724   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:15.420807   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:15.433542   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:15.920040   46354 api_server.go:166] Checking apiserver status ...
	I0907 00:52:15.920114   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0907 00:52:15.933104   46354 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:52:16.394845   46354 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0907 00:52:16.394878   46354 kubeadm.go:1128] stopping kube-system containers ...
	I0907 00:52:16.394891   46354 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:52:16.394939   46354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:52:16.430965   46354 cri.go:89] found id: ""
	I0907 00:52:16.431029   46354 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:52:16.449241   46354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:52:16.459891   46354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:52:16.459973   46354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:52:16.470006   46354 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0907 00:52:16.470033   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:16.591111   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.262647   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.481491   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:17.601432   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
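The five kubeadm invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) regenerate the control-plane state from /var/tmp/minikube/kubeadm.yaml without running a full "kubeadm init". A rough Go sketch of issuing that same phase sequence follows, with the binary path and config file copied from the log; it illustrates the command sequence only and is not minikube's own ssh_runner.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            // Mirrors the logged commands, e.g.
            //   sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
            c := exec.Command("/bin/bash", "-c", cmd)
            c.Stdout, c.Stderr = os.Stdout, os.Stderr
            if err := c.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
                os.Exit(1)
            }
        }
    }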
	I0907 00:52:17.722907   46354 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:52:17.723000   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:17.735327   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:16.360886   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.860619   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:16.230019   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.230274   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:17.262772   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:19.264986   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:18.254002   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:18.753686   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:19.253956   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:52:19.290590   46354 api_server.go:72] duration metric: took 1.567681708s to wait for apiserver process to appear ...
	I0907 00:52:19.290614   46354 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:52:19.290632   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:19.291177   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": dial tcp 192.168.83.245:8443: connect: connection refused
	I0907 00:52:19.291217   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:19.291691   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": dial tcp 192.168.83.245:8443: connect: connection refused
	I0907 00:52:19.792323   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:21.357716   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:23.358355   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:20.728569   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:22.730042   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:21.763571   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:24.264990   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:24.793514   46354 api_server.go:269] stopped: https://192.168.83.245:8443/healthz: Get "https://192.168.83.245:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0907 00:52:24.793568   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:24.939397   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:52:24.939429   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:52:25.292624   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:25.350968   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0907 00:52:25.351004   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0907 00:52:25.792573   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:25.799666   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0907 00:52:25.799697   46354 api_server.go:103] status: https://192.168.83.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0907 00:52:26.292258   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:52:26.301200   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 200:
	ok
	I0907 00:52:26.313982   46354 api_server.go:141] control plane version: v1.16.0
	I0907 00:52:26.314007   46354 api_server.go:131] duration metric: took 7.023387143s to wait for apiserver health ...
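The healthz polling above shows the usual progression while the apiserver comes back up: connection refused while the process starts, 403 for the anonymous probe before RBAC bootstrap has created the default roles, 500 while post-start hooks (rbac/bootstrap-roles, ca-registration, scheduling/bootstrap-system-priority-classes) are still failing, and finally 200. A minimal Go sketch of that kind of wait loop, using the URL from the log; skipping TLS verification is a simplification for the sketch and not how minikube authenticates to the cluster.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Simplification: the real client trusts the cluster CA and presents client certs.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200, the apiserver is healthy
                }
                // 403/500 mean the apiserver is up but not fully initialised yet; keep polling.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.83.245:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }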
	I0907 00:52:26.314016   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:52:26.314021   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:52:26.316011   46354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:52:26.317496   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:52:26.335726   46354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:52:26.373988   46354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:52:26.393836   46354 system_pods.go:59] 7 kube-system pods found
	I0907 00:52:26.393861   46354 system_pods.go:61] "coredns-5644d7b6d9-56l68" [ab956d84-2998-42a4-b9ed-b71bc43c9730] Running
	I0907 00:52:26.393866   46354 system_pods.go:61] "etcd-old-k8s-version-940806" [6234bc4e-66d0-4fb6-8631-b45ee56b774c] Running
	I0907 00:52:26.393870   46354 system_pods.go:61] "kube-apiserver-old-k8s-version-940806" [303d2368-1964-4bdb-9d46-91602d6c52b4] Running
	I0907 00:52:26.393875   46354 system_pods.go:61] "kube-controller-manager-old-k8s-version-940806" [7a193f1e-8650-453b-bfa5-d4af3a8bfbc3] Running
	I0907 00:52:26.393878   46354 system_pods.go:61] "kube-proxy-2d8pb" [1689f3e9-0487-422e-a450-9c96595cea00] Running
	I0907 00:52:26.393882   46354 system_pods.go:61] "kube-scheduler-old-k8s-version-940806" [cbd69cd2-3fc6-418b-aa4f-ef19b1b903e1] Running
	I0907 00:52:26.393886   46354 system_pods.go:61] "storage-provisioner" [f313e63f-6c39-4b81-86d1-8054fd6af338] Running
	I0907 00:52:26.393891   46354 system_pods.go:74] duration metric: took 19.879283ms to wait for pod list to return data ...
	I0907 00:52:26.393900   46354 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:52:26.401474   46354 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:52:26.401502   46354 node_conditions.go:123] node cpu capacity is 2
	I0907 00:52:26.401512   46354 node_conditions.go:105] duration metric: took 7.606706ms to run NodePressure ...
	I0907 00:52:26.401529   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:52:26.811645   46354 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0907 00:52:26.817493   46354 retry.go:31] will retry after 177.884133ms: kubelet not initialised
	I0907 00:52:26.999917   46354 retry.go:31] will retry after 499.371742ms: kubelet not initialised
	I0907 00:52:27.504386   46354 retry.go:31] will retry after 692.030349ms: kubelet not initialised
	I0907 00:52:28.201498   46354 retry.go:31] will retry after 627.806419ms: kubelet not initialised
	I0907 00:52:25.358575   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:27.860612   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:25.229134   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:27.230538   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:29.729637   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:26.764040   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:29.264855   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:28.841483   46354 retry.go:31] will retry after 1.816521725s: kubelet not initialised
	I0907 00:52:30.664615   46354 retry.go:31] will retry after 1.888537042s: kubelet not initialised
	I0907 00:52:32.559591   46354 retry.go:31] will retry after 1.787314239s: kubelet not initialised
	I0907 00:52:30.358330   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:32.857719   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:32.229103   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:34.229797   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:31.265047   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:33.763354   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:34.353206   46354 retry.go:31] will retry after 5.20863166s: kubelet not initialised
	I0907 00:52:34.860752   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:37.358005   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:36.229978   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:38.728934   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:36.264389   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:38.762232   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:39.567124   46354 retry.go:31] will retry after 8.04288108s: kubelet not initialised
	I0907 00:52:39.863004   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:42.359394   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:40.729770   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:43.236530   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:40.762994   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:43.263094   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:45.264328   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.616011   46354 retry.go:31] will retry after 4.959306281s: kubelet not initialised
	I0907 00:52:44.858665   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.359722   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:45.729067   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:48.228533   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:47.763985   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:50.263571   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.580975   46354 retry.go:31] will retry after 19.653399141s: kubelet not initialised
	I0907 00:52:49.858583   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.360050   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.361428   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:50.229168   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.229310   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.229581   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:52.263685   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:54.762390   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.857835   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.357322   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.728575   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.228623   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:56.762553   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:52:59.263070   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.357560   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.358151   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.228910   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.728870   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:01.264341   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:03.764046   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:05.858279   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:07.861484   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:05.729314   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:08.229765   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:06.263532   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:08.763318   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:12.241966   46354 kubeadm.go:787] kubelet initialised
	I0907 00:53:12.242006   46354 kubeadm.go:788] duration metric: took 45.430332167s waiting for restarted kubelet to initialise ...
	I0907 00:53:12.242016   46354 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:53:12.247545   46354 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.253242   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.253264   46354 pod_ready.go:81] duration metric: took 5.697075ms waiting for pod "coredns-5644d7b6d9-56l68" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.253276   46354 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.258467   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.258489   46354 pod_ready.go:81] duration metric: took 5.206456ms waiting for pod "coredns-5644d7b6d9-wj2s6" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.258497   46354 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.264371   46354 pod_ready.go:92] pod "etcd-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.264394   46354 pod_ready.go:81] duration metric: took 5.89143ms waiting for pod "etcd-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.264406   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.269447   46354 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.269467   46354 pod_ready.go:81] duration metric: took 5.053466ms waiting for pod "kube-apiserver-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.269481   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.638374   46354 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:12.638400   46354 pod_ready.go:81] duration metric: took 368.911592ms waiting for pod "kube-controller-manager-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:12.638413   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2d8pb" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.039158   46354 pod_ready.go:92] pod "kube-proxy-2d8pb" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:13.039183   46354 pod_ready.go:81] duration metric: took 400.763103ms waiting for pod "kube-proxy-2d8pb" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.039191   46354 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:10.359605   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:12.361679   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:10.729293   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.229130   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:11.263595   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.264729   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:15.268640   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:13.439450   46354 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace has status "Ready":"True"
	I0907 00:53:13.439477   46354 pod_ready.go:81] duration metric: took 400.279988ms waiting for pod "kube-scheduler-old-k8s-version-940806" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:13.439486   46354 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace to be "Ready" ...
	I0907 00:53:15.746303   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:17.747193   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:14.858056   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:16.860373   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:19.361777   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:15.730623   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:18.229790   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:17.763744   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.262360   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.246964   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:22.746507   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:21.361826   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:23.857891   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:20.729313   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:23.228479   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:22.263551   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:24.762509   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.246087   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:27.745946   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.858658   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:28.361105   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:25.732342   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:28.229971   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:26.763684   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:29.262971   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:29.746043   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:31.746133   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:30.857617   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:32.860863   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:30.728633   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:32.730094   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:31.264742   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:33.764483   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:33.748648   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:36.246158   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:35.358908   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:37.361998   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:35.229141   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:37.729367   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:36.263505   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:38.264633   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:38.746190   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.751934   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:39.858993   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:41.860052   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:44.359421   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.228491   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:42.229143   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:44.229996   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:40.766539   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:43.264325   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:43.245475   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:45.245574   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:47.246524   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:46.857876   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:48.859569   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:46.230037   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:48.727940   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:45.763110   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:47.763211   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.264727   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:49.745339   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:51.746054   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.859934   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:53.357432   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:50.729449   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:52.729731   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.731191   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:52.763145   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.763847   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:54.246469   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:56.746034   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:55.357937   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:57.856743   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:57.227742   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:59.228654   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:56.764030   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:58.765416   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:58.746909   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.246396   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:53:59.858583   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:02.357694   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:04.357907   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.229565   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.729229   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:01.263126   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.764100   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:03.745703   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:05.745994   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.858308   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:09.357561   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.229604   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.727738   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:06.262721   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.263088   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.264022   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:08.246673   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.246999   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.746105   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:11.358384   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:13.358491   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:10.729593   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.732429   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:12.762306   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.263152   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:14.746491   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.245728   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.361153   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.860338   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:15.229785   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.730926   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:19.733515   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:17.763593   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:20.264199   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:19.247271   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:21.251269   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:20.360652   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.860291   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.229545   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:24.729109   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:22.264956   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:24.764699   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:23.746737   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:25.747269   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:25.357166   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:27.358248   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:26.729136   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.226834   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:27.262945   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.763714   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:28.245784   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:30.245932   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.745051   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:29.860752   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.357600   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.361871   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:31.227731   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:33.727721   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:32.262586   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.263485   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:34.745803   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.745877   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.858000   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.859206   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:35.729469   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.227947   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:36.763348   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:38.763533   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:39.245567   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:41.246549   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:40.859969   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:42.862293   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:40.228842   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:42.230064   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:44.732421   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:41.263587   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:43.762536   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:43.746104   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:46.247106   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:45.358648   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:47.858022   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:47.229847   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:49.729764   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:45.763352   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:48.263554   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:48.745911   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.746370   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.357129   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.357416   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:54.359626   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.228487   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:54.728565   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:50.762919   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:52.764740   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:55.262939   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:53.248337   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:55.746300   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:56.858127   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.358102   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:56.730045   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.227094   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:57.263059   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:59.263696   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:54:58.247342   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:00.745494   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:02.748481   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.360153   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.360737   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.227937   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.235852   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:01.263956   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:03.763406   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.246551   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:07.747587   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.858981   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:07.861146   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.729711   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:08.228310   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:05.764163   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:08.263381   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.263936   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.247504   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.745798   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.360810   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.859446   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:10.229240   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.728782   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:14.729856   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:12.763565   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:15.263530   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:14.746534   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.246569   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:15.356953   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.358790   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:16.732983   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.228136   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:17.264573   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.763137   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.745008   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.745932   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:19.858109   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:22.358258   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.228589   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.729147   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:21.763406   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.763580   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:23.746337   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.748262   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:24.860943   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:27.357823   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.729423   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:27.731209   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:25.764235   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:28.263390   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:28.254786   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.746056   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:29.859827   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:31.861387   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:33.862627   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.227830   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:32.227911   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:34.728680   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:30.762895   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:32.763333   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:35.262940   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:33.247352   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:35.247638   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.747011   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:36.356562   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:38.358379   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.227942   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:39.230445   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:37.264134   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:39.763848   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:40.245726   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.246951   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:40.858763   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.859176   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:41.729215   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.228235   46768 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:42.263784   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.762310   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:44.747834   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:46.748669   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:45.361972   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:47.861601   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:45.453504   46768 pod_ready.go:81] duration metric: took 4m0.000384981s waiting for pod "metrics-server-57f55c9bc5-s95n2" in "kube-system" namespace to be "Ready" ...
	E0907 00:55:45.453536   46768 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:55:45.453557   46768 pod_ready.go:38] duration metric: took 4m14.103603262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:55:45.453586   46768 kubeadm.go:640] restartCluster took 4m33.861797616s
	W0907 00:55:45.453681   46768 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0907 00:55:45.453721   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0907 00:55:46.762627   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:48.764174   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:49.247771   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:51.747171   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:50.361591   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:52.362641   46833 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:53.550366   46833 pod_ready.go:81] duration metric: took 4m0.000125687s waiting for pod "metrics-server-57f55c9bc5-d7nxw" in "kube-system" namespace to be "Ready" ...
	E0907 00:55:53.550409   46833 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:55:53.550421   46833 pod_ready.go:38] duration metric: took 4m5.601345022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:55:53.550444   46833 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:55:53.550477   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:55:53.550553   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:55:53.601802   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:53.601823   46833 cri.go:89] found id: ""
	I0907 00:55:53.601831   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:55:53.601892   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.606465   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:55:53.606555   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:55:53.643479   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:53.643509   46833 cri.go:89] found id: ""
	I0907 00:55:53.643516   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:55:53.643562   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.648049   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:55:53.648101   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:55:53.679620   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:53.679648   46833 cri.go:89] found id: ""
	I0907 00:55:53.679658   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:55:53.679706   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.684665   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:55:53.684721   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:55:53.725282   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:53.725302   46833 cri.go:89] found id: ""
	I0907 00:55:53.725309   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:55:53.725364   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.729555   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:55:53.729627   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:55:53.761846   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:53.761875   46833 cri.go:89] found id: ""
	I0907 00:55:53.761883   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:55:53.761930   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.766451   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:55:53.766523   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:55:53.800099   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:53.800118   46833 cri.go:89] found id: ""
	I0907 00:55:53.800124   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:55:53.800168   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.804614   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:55:53.804676   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:55:53.841198   46833 cri.go:89] found id: ""
	I0907 00:55:53.841219   46833 logs.go:284] 0 containers: []
	W0907 00:55:53.841225   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:55:53.841230   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:55:53.841288   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:55:53.883044   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:53.883071   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:53.883077   46833 cri.go:89] found id: ""
	I0907 00:55:53.883085   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:55:53.883133   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.887172   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:53.891540   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:55:53.891566   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:53.944734   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:55:53.944765   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:53.979803   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:55:53.979832   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:54.015131   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:55:54.015159   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:54.062445   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:55:54.062478   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:54.097313   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:55:54.097343   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:55:54.685400   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:55:54.685442   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:55:51.262853   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:53.764766   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:54.248875   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:56.746538   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:54.836523   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:55:54.836555   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:54.885972   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:55:54.886002   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:54.918966   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:55:54.919000   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:54.951966   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:55:54.951996   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:55:54.991382   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:55:54.991418   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:55:55.048526   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:55:55.048561   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:55:57.564574   46833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:55:57.579844   46833 api_server.go:72] duration metric: took 4m15.68090954s to wait for apiserver process to appear ...
	I0907 00:55:57.579867   46833 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:55:57.579899   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:55:57.579963   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:55:57.619205   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:57.619225   46833 cri.go:89] found id: ""
	I0907 00:55:57.619235   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:55:57.619287   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.623884   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:55:57.623962   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:55:57.653873   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:57.653899   46833 cri.go:89] found id: ""
	I0907 00:55:57.653907   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:55:57.653967   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.658155   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:55:57.658219   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:55:57.688169   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:57.688195   46833 cri.go:89] found id: ""
	I0907 00:55:57.688203   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:55:57.688256   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.692208   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:55:57.692274   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:55:57.722477   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:57.722498   46833 cri.go:89] found id: ""
	I0907 00:55:57.722505   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:55:57.722548   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.726875   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:55:57.726926   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:55:57.768681   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:57.768709   46833 cri.go:89] found id: ""
	I0907 00:55:57.768718   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:55:57.768768   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.773562   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:55:57.773654   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:55:57.806133   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:57.806158   46833 cri.go:89] found id: ""
	I0907 00:55:57.806166   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:55:57.806222   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.810401   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:55:57.810446   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:55:57.840346   46833 cri.go:89] found id: ""
	I0907 00:55:57.840371   46833 logs.go:284] 0 containers: []
	W0907 00:55:57.840379   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:55:57.840384   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:55:57.840435   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:55:57.869978   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:57.869998   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:57.870002   46833 cri.go:89] found id: ""
	I0907 00:55:57.870008   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:55:57.870052   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.874945   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:55:57.878942   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:55:57.878964   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:55:58.015009   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:55:58.015035   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:55:58.063331   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:55:58.063365   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:55:58.098316   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:55:58.098343   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:55:58.140312   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:55:58.140342   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:55:58.170471   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:55:58.170499   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:55:58.217775   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:55:58.217804   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:55:58.275681   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:55:58.275717   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:55:58.323629   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:55:58.323663   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:55:58.360608   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:55:58.360636   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:55:58.397158   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:55:58.397193   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:55:58.435395   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:55:58.435425   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:55:59.023632   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:55:59.023687   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:55:55.767692   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:58.262808   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:00.263787   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:55:59.246042   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:01.746441   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:01.540667   46833 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0907 00:56:01.548176   46833 api_server.go:279] https://192.168.50.242:8443/healthz returned 200:
	ok
	I0907 00:56:01.549418   46833 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:01.549443   46833 api_server.go:131] duration metric: took 3.969568684s to wait for apiserver health ...
	I0907 00:56:01.549451   46833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:01.549474   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:01.549546   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:01.579945   46833 cri.go:89] found id: "3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:56:01.579975   46833 cri.go:89] found id: ""
	I0907 00:56:01.579985   46833 logs.go:284] 1 containers: [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c]
	I0907 00:56:01.580038   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.584609   46833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:01.584673   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:01.628626   46833 cri.go:89] found id: "3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:56:01.628647   46833 cri.go:89] found id: ""
	I0907 00:56:01.628656   46833 logs.go:284] 1 containers: [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0]
	I0907 00:56:01.628711   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.633293   46833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:01.633362   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:01.663898   46833 cri.go:89] found id: "855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:56:01.663923   46833 cri.go:89] found id: ""
	I0907 00:56:01.663932   46833 logs.go:284] 1 containers: [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc]
	I0907 00:56:01.663994   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.668130   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:01.668198   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:01.699021   46833 cri.go:89] found id: "9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:56:01.699045   46833 cri.go:89] found id: ""
	I0907 00:56:01.699055   46833 logs.go:284] 1 containers: [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213]
	I0907 00:56:01.699107   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.703470   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:01.703536   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:01.740360   46833 cri.go:89] found id: "6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:56:01.740387   46833 cri.go:89] found id: ""
	I0907 00:56:01.740396   46833 logs.go:284] 1 containers: [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3]
	I0907 00:56:01.740450   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.747366   46833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:01.747445   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:01.783175   46833 cri.go:89] found id: "22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:56:01.783218   46833 cri.go:89] found id: ""
	I0907 00:56:01.783226   46833 logs.go:284] 1 containers: [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168]
	I0907 00:56:01.783267   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.787565   46833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:01.787628   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:01.822700   46833 cri.go:89] found id: ""
	I0907 00:56:01.822730   46833 logs.go:284] 0 containers: []
	W0907 00:56:01.822740   46833 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:01.822747   46833 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:01.822818   46833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:01.853909   46833 cri.go:89] found id: "3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:56:01.853934   46833 cri.go:89] found id: "9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:56:01.853938   46833 cri.go:89] found id: ""
	I0907 00:56:01.853945   46833 logs.go:284] 2 containers: [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25]
	I0907 00:56:01.853990   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.858209   46833 ssh_runner.go:195] Run: which crictl
	I0907 00:56:01.862034   46833 logs.go:123] Gathering logs for coredns [855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc] ...
	I0907 00:56:01.862053   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855a29ec437beec087969aae2a0cf11e3f9eb63501d1adf6d6333f95c5cc67cc"
	I0907 00:56:01.902881   46833 logs.go:123] Gathering logs for kube-scheduler [9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213] ...
	I0907 00:56:01.902915   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9177fe24226fe71fd3d4db622f9139c89ade1b1e2dbce5781057bc0bb1631213"
	I0907 00:56:01.937846   46833 logs.go:123] Gathering logs for kube-controller-manager [22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168] ...
	I0907 00:56:01.937882   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22bdcb2b7b02d7be959dd27fcff3621cff01d3bd89b799eaa5eb1b9f109a8168"
	I0907 00:56:01.993495   46833 logs.go:123] Gathering logs for storage-provisioner [9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25] ...
	I0907 00:56:01.993526   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9094ebc4a03d9557da737132322c756422a62cb4ec528970b90c091e03a5ce25"
	I0907 00:56:02.029773   46833 logs.go:123] Gathering logs for container status ...
	I0907 00:56:02.029810   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:02.076180   46833 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:02.076210   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:02.133234   46833 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:02.133268   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:02.278183   46833 logs.go:123] Gathering logs for etcd [3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0] ...
	I0907 00:56:02.278209   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fee1540272d13b14acc0cd94342289baec24aadd54bed710715d5f2055ae7b0"
	I0907 00:56:02.325096   46833 logs.go:123] Gathering logs for kube-proxy [6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3] ...
	I0907 00:56:02.325125   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6af4cd8e3e58757c1c45b8b3ace3c5c70aef6fc6a5793464b3723ae99f4301f3"
	I0907 00:56:02.362517   46833 logs.go:123] Gathering logs for storage-provisioner [3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71] ...
	I0907 00:56:02.362542   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e19fc62694d0cc51b4f4862df3d78158ece9f16dc72a32443ec7337118e9e71"
	I0907 00:56:02.393393   46833 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:02.393430   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:02.950480   46833 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:02.950521   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:02.967628   46833 logs.go:123] Gathering logs for kube-apiserver [3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c] ...
	I0907 00:56:02.967658   46833 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bfeea0ca797bdf3d0d5f0467291e07ea63c6a3cf3bf9e96c0a1a4112758958c"
	I0907 00:56:05.533216   46833 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:05.533249   46833 system_pods.go:61] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running
	I0907 00:56:05.533257   46833 system_pods.go:61] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running
	I0907 00:56:05.533264   46833 system_pods.go:61] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running
	I0907 00:56:05.533271   46833 system_pods.go:61] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running
	I0907 00:56:05.533276   46833 system_pods.go:61] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running
	I0907 00:56:05.533283   46833 system_pods.go:61] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running
	I0907 00:56:05.533292   46833 system_pods.go:61] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:05.533305   46833 system_pods.go:61] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running
	I0907 00:56:05.533315   46833 system_pods.go:74] duration metric: took 3.983859289s to wait for pod list to return data ...
	I0907 00:56:05.533327   46833 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:05.536806   46833 default_sa.go:45] found service account: "default"
	I0907 00:56:05.536833   46833 default_sa.go:55] duration metric: took 3.496147ms for default service account to be created ...
	I0907 00:56:05.536842   46833 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:05.543284   46833 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:05.543310   46833 system_pods.go:89] "coredns-5dd5756b68-vrgm9" [0cba67a0-cbe9-4ad9-b0ca-b52a9e6542e9] Running
	I0907 00:56:05.543318   46833 system_pods.go:89] "etcd-embed-certs-546209" [8912d861-7015-4a84-b571-4994fc58a45c] Running
	I0907 00:56:05.543325   46833 system_pods.go:89] "kube-apiserver-embed-certs-546209" [0b67b20e-3ee5-46eb-8657-e4de4ea391e5] Running
	I0907 00:56:05.543332   46833 system_pods.go:89] "kube-controller-manager-embed-certs-546209" [15eed5a0-3403-45e9-80d2-bc4012e9b028] Running
	I0907 00:56:05.543337   46833 system_pods.go:89] "kube-proxy-47255" [6e6b85b5-8bdd-4d0d-8424-1e7276b263c0] Running
	I0907 00:56:05.543344   46833 system_pods.go:89] "kube-scheduler-embed-certs-546209" [2d1e82e0-a0ac-4498-bd9c-399566bd9c99] Running
	I0907 00:56:05.543355   46833 system_pods.go:89] "metrics-server-57f55c9bc5-d7nxw" [92e557f4-3c56-49f4-931c-0e64fa3cb1df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:05.543367   46833 system_pods.go:89] "storage-provisioner" [a741bf5a-bd74-49af-9920-2ba0a36a5d01] Running
	I0907 00:56:05.543377   46833 system_pods.go:126] duration metric: took 6.528914ms to wait for k8s-apps to be running ...
	I0907 00:56:05.543391   46833 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:05.543437   46833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:05.559581   46833 system_svc.go:56] duration metric: took 16.174514ms WaitForService to wait for kubelet.
	I0907 00:56:05.559613   46833 kubeadm.go:581] duration metric: took 4m23.660681176s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:05.559638   46833 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:05.564521   46833 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:05.564552   46833 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:05.564566   46833 node_conditions.go:105] duration metric: took 4.922449ms to run NodePressure ...
	I0907 00:56:05.564579   46833 start.go:228] waiting for startup goroutines ...
	I0907 00:56:05.564589   46833 start.go:233] waiting for cluster config update ...
	I0907 00:56:05.564609   46833 start.go:242] writing updated cluster config ...
	I0907 00:56:05.564968   46833 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:05.618906   46833 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:05.620461   46833 out.go:177] * Done! kubectl is now configured to use "embed-certs-546209" cluster and "default" namespace by default
	I0907 00:56:02.763702   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:05.264729   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:04.246390   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:06.246925   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:07.762598   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:09.764581   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:08.746379   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:11.246764   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:12.263747   47297 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:12.364712   47297 pod_ready.go:81] duration metric: took 4m0.00109115s waiting for pod "metrics-server-57f55c9bc5-2w2m6" in "kube-system" namespace to be "Ready" ...
	E0907 00:56:12.364763   47297 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:56:12.364776   47297 pod_ready.go:38] duration metric: took 4m3.209409487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:12.364799   47297 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:56:12.364833   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:12.364891   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:12.416735   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:12.416760   47297 cri.go:89] found id: ""
	I0907 00:56:12.416767   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:12.416818   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.423778   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:12.423849   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:12.465058   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:12.465086   47297 cri.go:89] found id: ""
	I0907 00:56:12.465095   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:12.465152   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.471730   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:12.471793   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:12.508984   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:12.509005   47297 cri.go:89] found id: ""
	I0907 00:56:12.509017   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:12.509073   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.513689   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:12.513745   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:12.550233   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:12.550257   47297 cri.go:89] found id: ""
	I0907 00:56:12.550266   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:12.550325   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.556588   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:12.556665   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:12.598826   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:12.598853   47297 cri.go:89] found id: ""
	I0907 00:56:12.598862   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:12.598913   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.603710   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:12.603778   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:12.645139   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:12.645169   47297 cri.go:89] found id: ""
	I0907 00:56:12.645179   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:12.645236   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.650685   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:12.650755   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:12.686256   47297 cri.go:89] found id: ""
	I0907 00:56:12.686284   47297 logs.go:284] 0 containers: []
	W0907 00:56:12.686291   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:12.686297   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:12.686349   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:12.719614   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:12.719638   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:12.719645   47297 cri.go:89] found id: ""
	I0907 00:56:12.719655   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:12.719713   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.724842   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:12.728880   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:12.728899   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:12.771051   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:12.771081   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:12.812110   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:12.812140   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:12.847819   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:12.847845   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:13.436674   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:13.436711   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:13.454385   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:13.454425   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:13.617809   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:13.617838   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:13.652209   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:13.652239   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:13.683939   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:13.683977   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:13.730116   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:13.730151   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:13.763253   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:13.763278   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:13.804890   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:13.804918   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:13.861822   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:13.861856   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:17.242461   46768 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.788701806s)
	I0907 00:56:17.242546   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:17.259241   46768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:56:17.268943   46768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:56:17.278094   46768 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:56:17.278138   46768 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0907 00:56:17.342868   46768 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0907 00:56:17.342981   46768 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 00:56:17.519943   46768 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:56:17.520089   46768 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:56:17.520214   46768 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 00:56:17.714902   46768 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:56:13.247487   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:15.746162   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:17.748049   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:17.716739   46768 out.go:204]   - Generating certificates and keys ...
	I0907 00:56:17.716894   46768 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 00:56:17.717007   46768 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 00:56:17.717113   46768 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0907 00:56:17.717361   46768 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0907 00:56:17.717892   46768 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0907 00:56:17.718821   46768 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0907 00:56:17.719502   46768 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0907 00:56:17.719996   46768 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0907 00:56:17.720644   46768 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0907 00:56:17.721254   46768 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0907 00:56:17.721832   46768 kubeadm.go:322] [certs] Using the existing "sa" key
	I0907 00:56:17.721911   46768 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:56:17.959453   46768 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:56:18.029012   46768 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:56:18.146402   46768 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:56:18.309148   46768 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:56:18.309726   46768 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:56:18.312628   46768 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:56:18.315593   46768 out.go:204]   - Booting up control plane ...
	I0907 00:56:18.315744   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:56:18.315870   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:56:18.317157   46768 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:56:18.336536   46768 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:56:18.336947   46768 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:56:18.337042   46768 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0907 00:56:18.472759   46768 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 00:56:16.415279   47297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:56:16.431021   47297 api_server.go:72] duration metric: took 4m14.6757965s to wait for apiserver process to appear ...
	I0907 00:56:16.431047   47297 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:56:16.431086   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:16.431144   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:16.474048   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:16.474075   47297 cri.go:89] found id: ""
	I0907 00:56:16.474085   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:16.474141   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.478873   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:16.478956   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:16.512799   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:16.512817   47297 cri.go:89] found id: ""
	I0907 00:56:16.512824   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:16.512880   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.518717   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:16.518812   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:16.553996   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:16.554016   47297 cri.go:89] found id: ""
	I0907 00:56:16.554023   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:16.554066   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.559358   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:16.559422   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:16.598717   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:16.598739   47297 cri.go:89] found id: ""
	I0907 00:56:16.598746   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:16.598821   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.603704   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:16.603766   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:16.646900   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:16.646928   47297 cri.go:89] found id: ""
	I0907 00:56:16.646937   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:16.646995   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.651216   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:16.651287   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:16.681334   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:16.681361   47297 cri.go:89] found id: ""
	I0907 00:56:16.681374   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:16.681429   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.685963   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:16.686028   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:16.720214   47297 cri.go:89] found id: ""
	I0907 00:56:16.720243   47297 logs.go:284] 0 containers: []
	W0907 00:56:16.720253   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:16.720259   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:16.720316   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:16.756411   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:16.756437   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:16.756444   47297 cri.go:89] found id: ""
	I0907 00:56:16.756452   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:16.756512   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.762211   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:16.767635   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:16.767659   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:16.784092   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:16.784122   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:16.936817   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:16.936845   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:16.979426   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:16.979455   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:17.009878   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:17.009912   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:17.048086   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:17.048113   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:17.103114   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:17.103156   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:17.139125   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:17.139163   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:17.181560   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:17.181588   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:17.224815   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:17.224841   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:17.299438   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:17.299474   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:17.355165   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:17.355197   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:17.403781   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:17.403809   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:20.491060   47297 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8444/healthz ...
	I0907 00:56:20.498573   47297 api_server.go:279] https://192.168.39.96:8444/healthz returned 200:
	ok
	I0907 00:56:20.501753   47297 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:20.501774   47297 api_server.go:131] duration metric: took 4.070720466s to wait for apiserver health ...
	I0907 00:56:20.501782   47297 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:20.501807   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0907 00:56:20.501856   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0907 00:56:20.545524   47297 cri.go:89] found id: "891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:20.545550   47297 cri.go:89] found id: ""
	I0907 00:56:20.545560   47297 logs.go:284] 1 containers: [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0]
	I0907 00:56:20.545616   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.552051   47297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0907 00:56:20.552120   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0907 00:56:20.593019   47297 cri.go:89] found id: "e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:20.593041   47297 cri.go:89] found id: ""
	I0907 00:56:20.593049   47297 logs.go:284] 1 containers: [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13]
	I0907 00:56:20.593104   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.598430   47297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0907 00:56:20.598500   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0907 00:56:20.639380   47297 cri.go:89] found id: "d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:20.639407   47297 cri.go:89] found id: ""
	I0907 00:56:20.639417   47297 logs.go:284] 1 containers: [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08]
	I0907 00:56:20.639507   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.645270   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0907 00:56:20.645342   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0907 00:56:20.247030   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:22.247132   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:20.684338   47297 cri.go:89] found id: "a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:20.684368   47297 cri.go:89] found id: ""
	I0907 00:56:20.684378   47297 logs.go:284] 1 containers: [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02]
	I0907 00:56:20.684438   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.689465   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0907 00:56:20.689528   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0907 00:56:20.727854   47297 cri.go:89] found id: "0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:20.727879   47297 cri.go:89] found id: ""
	I0907 00:56:20.727887   47297 logs.go:284] 1 containers: [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad]
	I0907 00:56:20.727938   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.733320   47297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0907 00:56:20.733389   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0907 00:56:20.776584   47297 cri.go:89] found id: "0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:20.776607   47297 cri.go:89] found id: ""
	I0907 00:56:20.776614   47297 logs.go:284] 1 containers: [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704]
	I0907 00:56:20.776659   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.781745   47297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0907 00:56:20.781822   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0907 00:56:20.817720   47297 cri.go:89] found id: ""
	I0907 00:56:20.817746   47297 logs.go:284] 0 containers: []
	W0907 00:56:20.817756   47297 logs.go:286] No container was found matching "kindnet"
	I0907 00:56:20.817763   47297 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0907 00:56:20.817819   47297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0907 00:56:20.857693   47297 cri.go:89] found id: "a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:20.857716   47297 cri.go:89] found id: "cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:20.857723   47297 cri.go:89] found id: ""
	I0907 00:56:20.857732   47297 logs.go:284] 2 containers: [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c]
	I0907 00:56:20.857788   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.862242   47297 ssh_runner.go:195] Run: which crictl
	I0907 00:56:20.866469   47297 logs.go:123] Gathering logs for kube-proxy [0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad] ...
	I0907 00:56:20.866489   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0672903c9cfb10816cd827e70d6bd270e2e301d06503718b30648ff1a65951ad"
	I0907 00:56:20.907476   47297 logs.go:123] Gathering logs for storage-provisioner [a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0] ...
	I0907 00:56:20.907514   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c3d8a195ffdfb6aae9c32b5bc03a6612ee690d2883692abe63b10300ad22d0"
	I0907 00:56:20.946383   47297 logs.go:123] Gathering logs for storage-provisioner [cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c] ...
	I0907 00:56:20.946418   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdcb5afe48490d83bb3215a6ac78a81b0095f9c897bcca5724fb2b74413a3a1c"
	I0907 00:56:20.983830   47297 logs.go:123] Gathering logs for CRI-O ...
	I0907 00:56:20.983858   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0907 00:56:21.572473   47297 logs.go:123] Gathering logs for container status ...
	I0907 00:56:21.572524   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0907 00:56:21.626465   47297 logs.go:123] Gathering logs for kubelet ...
	I0907 00:56:21.626496   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0907 00:56:21.692455   47297 logs.go:123] Gathering logs for dmesg ...
	I0907 00:56:21.692491   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0907 00:56:21.712600   47297 logs.go:123] Gathering logs for describe nodes ...
	I0907 00:56:21.712632   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0907 00:56:21.855914   47297 logs.go:123] Gathering logs for kube-apiserver [891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0] ...
	I0907 00:56:21.855948   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 891a5075955e0c69762ff8a07f3b020859455a929d609d2f8bc9e57d21cd3df0"
	I0907 00:56:21.909035   47297 logs.go:123] Gathering logs for etcd [e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13] ...
	I0907 00:56:21.909068   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c2c9d202be91e6c8cb5fc8313f82376f57c1527a2e71b55d087d88094a13"
	I0907 00:56:21.961286   47297 logs.go:123] Gathering logs for coredns [d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08] ...
	I0907 00:56:21.961317   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d28e9dadd44dacef2db14419edf4fbad3604273576d2e5cbfd728ff8d4c5ab08"
	I0907 00:56:22.002150   47297 logs.go:123] Gathering logs for kube-scheduler [a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02] ...
	I0907 00:56:22.002177   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0f6bff3368821d0170835d31bd9581d293349435089c71b418460b8db94df02"
	I0907 00:56:22.035129   47297 logs.go:123] Gathering logs for kube-controller-manager [0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704] ...
	I0907 00:56:22.035156   47297 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0692c75701ac71fa7f589039f7068a99397925e0508ac2b6dac1b73fe6725704"
	I0907 00:56:24.592419   47297 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:24.592455   47297 system_pods.go:61] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running
	I0907 00:56:24.592460   47297 system_pods.go:61] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running
	I0907 00:56:24.592464   47297 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running
	I0907 00:56:24.592469   47297 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running
	I0907 00:56:24.592473   47297 system_pods.go:61] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running
	I0907 00:56:24.592477   47297 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running
	I0907 00:56:24.592483   47297 system_pods.go:61] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:24.592489   47297 system_pods.go:61] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running
	I0907 00:56:24.592494   47297 system_pods.go:74] duration metric: took 4.090707422s to wait for pod list to return data ...
	I0907 00:56:24.592501   47297 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:24.596106   47297 default_sa.go:45] found service account: "default"
	I0907 00:56:24.596127   47297 default_sa.go:55] duration metric: took 3.621408ms for default service account to be created ...
	I0907 00:56:24.596134   47297 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:24.601998   47297 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:24.602021   47297 system_pods.go:89] "coredns-5dd5756b68-wdnpc" [98e46ef4-ee2b-4d80-9c3c-b1d675142c7f] Running
	I0907 00:56:24.602026   47297 system_pods.go:89] "etcd-default-k8s-diff-port-773466" [f2d0fe7e-ef8d-4bd6-bbe6-683c026c1aa2] Running
	I0907 00:56:24.602032   47297 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-773466" [899f3718-c532-4137-96ae-dc39c2ed9e97] Running
	I0907 00:56:24.602037   47297 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-773466" [80180576-94bd-43c0-a83b-ba48e6f0a056] Running
	I0907 00:56:24.602041   47297 system_pods.go:89] "kube-proxy-5bh7n" [28b4df63-f3db-4544-ab5d-54a021be48bf] Running
	I0907 00:56:24.602046   47297 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-773466" [f383f2e1-9d1e-4e07-9a8e-b2b2e4cb1879] Running
	I0907 00:56:24.602054   47297 system_pods.go:89] "metrics-server-57f55c9bc5-2w2m6" [70d0ed87-ab6c-4f43-b12d-4730244d67db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:24.602063   47297 system_pods.go:89] "storage-provisioner" [54e9c6d3-3c07-4afe-94cd-e57f83ba3152] Running
	I0907 00:56:24.602069   47297 system_pods.go:126] duration metric: took 5.931212ms to wait for k8s-apps to be running ...
	I0907 00:56:24.602076   47297 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:24.602116   47297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:24.623704   47297 system_svc.go:56] duration metric: took 21.617229ms WaitForService to wait for kubelet.
	I0907 00:56:24.623734   47297 kubeadm.go:581] duration metric: took 4m22.868513281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:24.623754   47297 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:24.628408   47297 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:24.628435   47297 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:24.628444   47297 node_conditions.go:105] duration metric: took 4.686272ms to run NodePressure ...
	I0907 00:56:24.628454   47297 start.go:228] waiting for startup goroutines ...
	I0907 00:56:24.628460   47297 start.go:233] waiting for cluster config update ...
	I0907 00:56:24.628469   47297 start.go:242] writing updated cluster config ...
	I0907 00:56:24.628735   47297 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:24.683237   47297 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:24.686336   47297 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-773466" cluster and "default" namespace by default
	I0907 00:56:26.977381   46768 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503998 seconds
	I0907 00:56:26.977624   46768 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:56:27.000116   46768 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:56:27.541598   46768 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:56:27.541809   46768 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-321164 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0907 00:56:28.055045   46768 kubeadm.go:322] [bootstrap-token] Using token: 7x1950.9u417zcplp1q0xai
	I0907 00:56:24.247241   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:26.773163   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:28.056582   46768 out.go:204]   - Configuring RBAC rules ...
	I0907 00:56:28.056725   46768 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:56:28.065256   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0907 00:56:28.075804   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:56:28.081996   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 00:56:28.090825   46768 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:56:28.097257   46768 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:56:28.114787   46768 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0907 00:56:28.337001   46768 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 00:56:28.476411   46768 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 00:56:28.479682   46768 kubeadm.go:322] 
	I0907 00:56:28.479784   46768 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 00:56:28.479799   46768 kubeadm.go:322] 
	I0907 00:56:28.479898   46768 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 00:56:28.479912   46768 kubeadm.go:322] 
	I0907 00:56:28.479943   46768 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 00:56:28.480046   46768 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:56:28.480143   46768 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:56:28.480163   46768 kubeadm.go:322] 
	I0907 00:56:28.480343   46768 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0907 00:56:28.480361   46768 kubeadm.go:322] 
	I0907 00:56:28.480431   46768 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0907 00:56:28.480450   46768 kubeadm.go:322] 
	I0907 00:56:28.480544   46768 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 00:56:28.480656   46768 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:56:28.480783   46768 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:56:28.480796   46768 kubeadm.go:322] 
	I0907 00:56:28.480924   46768 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0907 00:56:28.481024   46768 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 00:56:28.481034   46768 kubeadm.go:322] 
	I0907 00:56:28.481117   46768 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7x1950.9u417zcplp1q0xai \
	I0907 00:56:28.481203   46768 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 00:56:28.481223   46768 kubeadm.go:322] 	--control-plane 
	I0907 00:56:28.481226   46768 kubeadm.go:322] 
	I0907 00:56:28.481346   46768 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:56:28.481355   46768 kubeadm.go:322] 
	I0907 00:56:28.481453   46768 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7x1950.9u417zcplp1q0xai \
	I0907 00:56:28.481572   46768 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:56:28.482216   46768 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:56:28.482238   46768 cni.go:84] Creating CNI manager for ""
	I0907 00:56:28.482248   46768 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:56:28.484094   46768 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:56:28.485597   46768 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:56:28.537400   46768 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:56:28.577654   46768 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:56:28.577734   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:28.577747   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=no-preload-321164 minikube.k8s.io/updated_at=2023_09_07T00_56_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:28.909178   46768 ops.go:34] apiserver oom_adj: -16
	I0907 00:56:28.920821   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.027812   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.627489   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:30.127554   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:29.246606   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:31.746291   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:30.627315   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:31.127759   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:31.627183   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:32.127488   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:32.627464   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.126850   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.626901   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:34.126917   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:34.626850   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:35.127788   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:33.747054   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:35.747536   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:35.627454   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:36.126916   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:36.626926   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:37.126845   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:37.627579   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:38.126885   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:38.627849   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:39.127371   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:39.627929   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.127775   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.627392   46768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:56:40.760535   46768 kubeadm.go:1081] duration metric: took 12.182860946s to wait for elevateKubeSystemPrivileges.
	I0907 00:56:40.760574   46768 kubeadm.go:406] StartCluster complete in 5m29.209699324s
	I0907 00:56:40.760594   46768 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:56:40.760690   46768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:56:40.762820   46768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:56:40.763132   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:56:40.763152   46768 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:56:40.763245   46768 addons.go:69] Setting storage-provisioner=true in profile "no-preload-321164"
	I0907 00:56:40.763251   46768 addons.go:69] Setting default-storageclass=true in profile "no-preload-321164"
	I0907 00:56:40.763263   46768 addons.go:231] Setting addon storage-provisioner=true in "no-preload-321164"
	W0907 00:56:40.763271   46768 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:56:40.763272   46768 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-321164"
	I0907 00:56:40.763314   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.763357   46768 config.go:182] Loaded profile config "no-preload-321164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:56:40.763404   46768 addons.go:69] Setting metrics-server=true in profile "no-preload-321164"
	I0907 00:56:40.763421   46768 addons.go:231] Setting addon metrics-server=true in "no-preload-321164"
	W0907 00:56:40.763428   46768 addons.go:240] addon metrics-server should already be in state true
	I0907 00:56:40.763464   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.763718   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763747   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.763772   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763793   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.763811   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.763833   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.781727   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I0907 00:56:40.781738   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0907 00:56:40.781741   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0907 00:56:40.782188   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782279   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782332   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.782702   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782724   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.782856   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782873   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.782879   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.782894   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.783096   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783306   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783354   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.783531   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.783686   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.783717   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.783905   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.783949   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.801244   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0907 00:56:40.801534   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36269
	I0907 00:56:40.801961   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.802064   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.802509   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.802529   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.802673   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.802689   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.802942   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.803153   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.803218   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.803365   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.804775   46768 addons.go:231] Setting addon default-storageclass=true in "no-preload-321164"
	W0907 00:56:40.804798   46768 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:56:40.804828   46768 host.go:66] Checking if "no-preload-321164" exists ...
	I0907 00:56:40.805191   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.805490   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.807809   46768 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:56:40.806890   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.809154   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.809188   46768 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:56:40.809199   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:56:40.809215   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.809249   46768 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:56:40.810543   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:56:40.810557   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:56:40.810570   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.809485   46768 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-321164" context rescaled to 1 replicas
	I0907 00:56:40.810637   46768 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.125 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:56:40.813528   46768 out.go:177] * Verifying Kubernetes components...
	I0907 00:56:38.246743   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:40.747015   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:40.814976   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:40.817948   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.818029   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.818080   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818100   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.818117   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818137   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818156   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.818175   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.818212   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.818282   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.818348   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.818462   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.818472   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.818676   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.827224   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I0907 00:56:40.827578   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.828106   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.828122   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.828464   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.829012   46768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:56:40.829043   46768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:56:40.843423   46768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41287
	I0907 00:56:40.843768   46768 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:56:40.844218   46768 main.go:141] libmachine: Using API Version  1
	I0907 00:56:40.844236   46768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:56:40.844529   46768 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:56:40.844735   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetState
	I0907 00:56:40.846265   46768 main.go:141] libmachine: (no-preload-321164) Calling .DriverName
	I0907 00:56:40.846489   46768 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:56:40.846506   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:56:40.846525   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHHostname
	I0907 00:56:40.849325   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.849666   46768 main.go:141] libmachine: (no-preload-321164) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:da:68", ip: ""} in network mk-no-preload-321164: {Iface:virbr1 ExpiryTime:2023-09-07 01:42:42 +0000 UTC Type:0 Mac:52:54:00:eb:da:68 Iaid: IPaddr:192.168.61.125 Prefix:24 Hostname:no-preload-321164 Clientid:01:52:54:00:eb:da:68}
	I0907 00:56:40.849704   46768 main.go:141] libmachine: (no-preload-321164) DBG | domain no-preload-321164 has defined IP address 192.168.61.125 and MAC address 52:54:00:eb:da:68 in network mk-no-preload-321164
	I0907 00:56:40.849897   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHPort
	I0907 00:56:40.850103   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHKeyPath
	I0907 00:56:40.850251   46768 main.go:141] libmachine: (no-preload-321164) Calling .GetSSHUsername
	I0907 00:56:40.850397   46768 sshutil.go:53] new ssh client: &{IP:192.168.61.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/no-preload-321164/id_rsa Username:docker}
	I0907 00:56:40.965966   46768 node_ready.go:35] waiting up to 6m0s for node "no-preload-321164" to be "Ready" ...
	I0907 00:56:40.966030   46768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0907 00:56:40.997127   46768 node_ready.go:49] node "no-preload-321164" has status "Ready":"True"
	I0907 00:56:40.997149   46768 node_ready.go:38] duration metric: took 31.151467ms waiting for node "no-preload-321164" to be "Ready" ...
	I0907 00:56:40.997158   46768 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:41.010753   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:56:41.011536   46768 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:41.022410   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:56:41.022431   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:56:41.051599   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:56:41.119566   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:56:41.119594   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:56:41.228422   46768 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:56:41.228443   46768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:56:41.321420   46768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:56:42.776406   46768 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.810334575s)
	I0907 00:56:42.776435   46768 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0907 00:56:43.385184   46768 pod_ready.go:102] pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:43.446190   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.435398332s)
	I0907 00:56:43.446240   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.446248   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.3946112s)
	I0907 00:56:43.446255   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.449355   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.449362   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.449377   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.449389   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.449406   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.449732   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.449771   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.449787   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.450189   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.450216   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.450653   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.450672   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.450682   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.450691   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.451532   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.451597   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.451619   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.451635   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.451648   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.451869   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.451885   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.451895   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.689511   46768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.368045812s)
	I0907 00:56:43.689565   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.689579   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.689952   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.689963   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.689974   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.689991   46768 main.go:141] libmachine: Making call to close driver server
	I0907 00:56:43.690001   46768 main.go:141] libmachine: (no-preload-321164) Calling .Close
	I0907 00:56:43.690291   46768 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:56:43.690307   46768 main.go:141] libmachine: (no-preload-321164) DBG | Closing plugin on server side
	I0907 00:56:43.690309   46768 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:56:43.690322   46768 addons.go:467] Verifying addon metrics-server=true in "no-preload-321164"
	I0907 00:56:43.693105   46768 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:56:43.694562   46768 addons.go:502] enable addons completed in 2.931409197s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0907 00:56:45.310723   46768 pod_ready.go:92] pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.310742   46768 pod_ready.go:81] duration metric: took 4.299181671s waiting for pod "coredns-5dd5756b68-8tnp7" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.310753   46768 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.316350   46768 pod_ready.go:92] pod "etcd-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.316373   46768 pod_ready.go:81] duration metric: took 5.614264ms waiting for pod "etcd-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.316385   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.321183   46768 pod_ready.go:92] pod "kube-apiserver-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.321205   46768 pod_ready.go:81] duration metric: took 4.811919ms waiting for pod "kube-apiserver-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.321216   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.326279   46768 pod_ready.go:92] pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.326297   46768 pod_ready.go:81] duration metric: took 5.0741ms waiting for pod "kube-controller-manager-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.326308   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-st6n8" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.332665   46768 pod_ready.go:92] pod "kube-proxy-st6n8" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.332687   46768 pod_ready.go:81] duration metric: took 6.372253ms waiting for pod "kube-proxy-st6n8" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.332697   46768 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.708023   46768 pod_ready.go:92] pod "kube-scheduler-no-preload-321164" in "kube-system" namespace has status "Ready":"True"
	I0907 00:56:45.708044   46768 pod_ready.go:81] duration metric: took 375.339873ms waiting for pod "kube-scheduler-no-preload-321164" in "kube-system" namespace to be "Ready" ...
	I0907 00:56:45.708051   46768 pod_ready.go:38] duration metric: took 4.710884592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:56:45.708065   46768 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:56:45.708106   46768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:56:45.725929   46768 api_server.go:72] duration metric: took 4.915250734s to wait for apiserver process to appear ...
	I0907 00:56:45.725950   46768 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:56:45.725964   46768 api_server.go:253] Checking apiserver healthz at https://192.168.61.125:8443/healthz ...
	I0907 00:56:45.731998   46768 api_server.go:279] https://192.168.61.125:8443/healthz returned 200:
	ok
	I0907 00:56:45.733492   46768 api_server.go:141] control plane version: v1.28.1
	I0907 00:56:45.733507   46768 api_server.go:131] duration metric: took 7.552661ms to wait for apiserver health ...
	I0907 00:56:45.733514   46768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:56:45.911337   46768 system_pods.go:59] 8 kube-system pods found
	I0907 00:56:45.911374   46768 system_pods.go:61] "coredns-5dd5756b68-8tnp7" [1d896961-1b2c-48fd-b9dd-a40a95174fed] Running
	I0907 00:56:45.911383   46768 system_pods.go:61] "etcd-no-preload-321164" [84b8dd41-f676-48e0-b231-c27178cc0345] Running
	I0907 00:56:45.911389   46768 system_pods.go:61] "kube-apiserver-no-preload-321164" [a5a3cde8-128a-411d-9970-d3811ba22c5c] Running
	I0907 00:56:45.911397   46768 system_pods.go:61] "kube-controller-manager-no-preload-321164" [81614893-1ef1-4246-84ad-d4a2d9dedff8] Running
	I0907 00:56:45.911403   46768 system_pods.go:61] "kube-proxy-st6n8" [8f3aa3f2-223b-43de-b0e9-987958c50108] Running
	I0907 00:56:45.911410   46768 system_pods.go:61] "kube-scheduler-no-preload-321164" [7a45c187-7365-4144-ae68-ba42b1069afd] Running
	I0907 00:56:45.911421   46768 system_pods.go:61] "metrics-server-57f55c9bc5-vgngs" [9036423c-c4f7-4beb-92da-e106b8af306c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:45.911435   46768 system_pods.go:61] "storage-provisioner" [58bbe692-61d0-466d-b6bf-28af2faf4ec9] Running
	I0907 00:56:45.911443   46768 system_pods.go:74] duration metric: took 177.923008ms to wait for pod list to return data ...
	I0907 00:56:45.911455   46768 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:56:46.107121   46768 default_sa.go:45] found service account: "default"
	I0907 00:56:46.107149   46768 default_sa.go:55] duration metric: took 195.685496ms for default service account to be created ...
	I0907 00:56:46.107159   46768 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:56:46.314551   46768 system_pods.go:86] 8 kube-system pods found
	I0907 00:56:46.314588   46768 system_pods.go:89] "coredns-5dd5756b68-8tnp7" [1d896961-1b2c-48fd-b9dd-a40a95174fed] Running
	I0907 00:56:46.314596   46768 system_pods.go:89] "etcd-no-preload-321164" [84b8dd41-f676-48e0-b231-c27178cc0345] Running
	I0907 00:56:46.314603   46768 system_pods.go:89] "kube-apiserver-no-preload-321164" [a5a3cde8-128a-411d-9970-d3811ba22c5c] Running
	I0907 00:56:46.314611   46768 system_pods.go:89] "kube-controller-manager-no-preload-321164" [81614893-1ef1-4246-84ad-d4a2d9dedff8] Running
	I0907 00:56:46.314618   46768 system_pods.go:89] "kube-proxy-st6n8" [8f3aa3f2-223b-43de-b0e9-987958c50108] Running
	I0907 00:56:46.314624   46768 system_pods.go:89] "kube-scheduler-no-preload-321164" [7a45c187-7365-4144-ae68-ba42b1069afd] Running
	I0907 00:56:46.314634   46768 system_pods.go:89] "metrics-server-57f55c9bc5-vgngs" [9036423c-c4f7-4beb-92da-e106b8af306c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:56:46.314645   46768 system_pods.go:89] "storage-provisioner" [58bbe692-61d0-466d-b6bf-28af2faf4ec9] Running
	I0907 00:56:46.314653   46768 system_pods.go:126] duration metric: took 207.48874ms to wait for k8s-apps to be running ...
	I0907 00:56:46.314663   46768 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:56:46.314713   46768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:56:46.331286   46768 system_svc.go:56] duration metric: took 16.613382ms WaitForService to wait for kubelet.
	I0907 00:56:46.331316   46768 kubeadm.go:581] duration metric: took 5.520640777s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:56:46.331342   46768 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:56:46.507374   46768 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:56:46.507398   46768 node_conditions.go:123] node cpu capacity is 2
	I0907 00:56:46.507406   46768 node_conditions.go:105] duration metric: took 176.059527ms to run NodePressure ...
	I0907 00:56:46.507417   46768 start.go:228] waiting for startup goroutines ...
	I0907 00:56:46.507422   46768 start.go:233] waiting for cluster config update ...
	I0907 00:56:46.507433   46768 start.go:242] writing updated cluster config ...
	I0907 00:56:46.507728   46768 ssh_runner.go:195] Run: rm -f paused
	I0907 00:56:46.559712   46768 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0907 00:56:46.561693   46768 out.go:177] * Done! kubectl is now configured to use "no-preload-321164" cluster and "default" namespace by default
	I0907 00:56:43.245531   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:45.746168   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:48.247228   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:50.746605   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:52.748264   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:55.246186   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:56:57.746658   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:00.245358   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:02.246373   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:04.746154   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:07.245583   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:09.246215   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:11.247141   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:13.247249   46354 pod_ready.go:102] pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:13.440321   46354 pod_ready.go:81] duration metric: took 4m0.000811237s waiting for pod "metrics-server-74d5856cc6-6s7hd" in "kube-system" namespace to be "Ready" ...
	E0907 00:57:13.440352   46354 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0907 00:57:13.440368   46354 pod_ready.go:38] duration metric: took 4m1.198343499s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:57:13.440395   46354 kubeadm.go:640] restartCluster took 5m7.071390852s
	W0907 00:57:13.440463   46354 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0907 00:57:13.440538   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0907 00:57:26.505313   46354 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.064737983s)
	I0907 00:57:26.505392   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:57:26.521194   46354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:57:26.530743   46354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:57:26.540431   46354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:57:26.540473   46354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0907 00:57:26.744360   46354 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:57:39.131760   46354 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0907 00:57:39.131857   46354 kubeadm.go:322] [preflight] Running pre-flight checks
	I0907 00:57:39.131964   46354 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:57:39.132110   46354 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:57:39.132226   46354 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0907 00:57:39.132360   46354 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:57:39.132501   46354 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:57:39.132573   46354 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0907 00:57:39.132654   46354 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:57:39.134121   46354 out.go:204]   - Generating certificates and keys ...
	I0907 00:57:39.134212   46354 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0907 00:57:39.134313   46354 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0907 00:57:39.134422   46354 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0907 00:57:39.134501   46354 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0907 00:57:39.134605   46354 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0907 00:57:39.134688   46354 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0907 00:57:39.134801   46354 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0907 00:57:39.134902   46354 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0907 00:57:39.135010   46354 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0907 00:57:39.135121   46354 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0907 00:57:39.135169   46354 kubeadm.go:322] [certs] Using the existing "sa" key
	I0907 00:57:39.135241   46354 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:57:39.135308   46354 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:57:39.135393   46354 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:57:39.135512   46354 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:57:39.135599   46354 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:57:39.135700   46354 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:57:39.137273   46354 out.go:204]   - Booting up control plane ...
	I0907 00:57:39.137369   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:57:39.137458   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:57:39.137561   46354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:57:39.137677   46354 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:57:39.137888   46354 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0907 00:57:39.138013   46354 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503675 seconds
	I0907 00:57:39.138137   46354 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:57:39.138249   46354 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:57:39.138297   46354 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:57:39.138402   46354 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-940806 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0907 00:57:39.138453   46354 kubeadm.go:322] [bootstrap-token] Using token: nfcsq1.o4ef3s2bthacz2l0
	I0907 00:57:39.139754   46354 out.go:204]   - Configuring RBAC rules ...
	I0907 00:57:39.139848   46354 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:57:39.139970   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:57:39.140112   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 00:57:39.140245   46354 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:57:39.140327   46354 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:57:39.140393   46354 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0907 00:57:39.140442   46354 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0907 00:57:39.140452   46354 kubeadm.go:322] 
	I0907 00:57:39.140525   46354 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0907 00:57:39.140533   46354 kubeadm.go:322] 
	I0907 00:57:39.140628   46354 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0907 00:57:39.140635   46354 kubeadm.go:322] 
	I0907 00:57:39.140665   46354 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0907 00:57:39.140752   46354 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:57:39.140822   46354 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:57:39.140834   46354 kubeadm.go:322] 
	I0907 00:57:39.140896   46354 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0907 00:57:39.140960   46354 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:57:39.141043   46354 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:57:39.141051   46354 kubeadm.go:322] 
	I0907 00:57:39.141159   46354 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0907 00:57:39.141262   46354 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0907 00:57:39.141276   46354 kubeadm.go:322] 
	I0907 00:57:39.141407   46354 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nfcsq1.o4ef3s2bthacz2l0 \
	I0907 00:57:39.141536   46354 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c \
	I0907 00:57:39.141568   46354 kubeadm.go:322]     --control-plane 	  
	I0907 00:57:39.141575   46354 kubeadm.go:322] 
	I0907 00:57:39.141657   46354 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:57:39.141665   46354 kubeadm.go:322] 
	I0907 00:57:39.141730   46354 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nfcsq1.o4ef3s2bthacz2l0 \
	I0907 00:57:39.141832   46354 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8f4bdea8bf9859aeafc5c58f542aed2638948c62043fc842b541b3aa284caf2c 
	I0907 00:57:39.141851   46354 cni.go:84] Creating CNI manager for ""
	I0907 00:57:39.141863   46354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:57:39.143462   46354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:57:39.144982   46354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:57:39.158663   46354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0907 00:57:39.180662   46354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:57:39.180747   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.180749   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=old-k8s-version-940806 minikube.k8s.io/updated_at=2023_09_07T00_57_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.208969   46354 ops.go:34] apiserver oom_adj: -16
	I0907 00:57:39.426346   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:39.545090   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:40.162127   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:40.662172   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:41.162069   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:41.662164   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:42.162355   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:42.662152   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:43.161862   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:43.661532   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:44.162130   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:44.661948   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:45.162260   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:45.662082   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:46.162345   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:46.662378   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:47.162307   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:47.662556   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:48.162204   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:48.661938   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:49.161608   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:49.662198   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:50.162016   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:50.662392   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:51.162303   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:51.662393   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:52.162510   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:52.662195   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:53.162302   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:53.662427   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.162085   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.662218   46354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:57:54.779895   46354 kubeadm.go:1081] duration metric: took 15.599222217s to wait for elevateKubeSystemPrivileges.
	I0907 00:57:54.779927   46354 kubeadm.go:406] StartCluster complete in 5m48.456500898s
	I0907 00:57:54.779949   46354 settings.go:142] acquiring lock: {Name:mk70176f1f3b72bac4754a7455492f18c5cd378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:57:54.780038   46354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:57:54.782334   46354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/kubeconfig: {Name:mkdda1adef658dc7d0effc48f2bfbbe09125150f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:57:54.782624   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:57:54.782772   46354 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0907 00:57:54.782871   46354 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-940806"
	I0907 00:57:54.782890   46354 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-940806"
	I0907 00:57:54.782900   46354 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-940806"
	W0907 00:57:54.782908   46354 addons.go:240] addon storage-provisioner should already be in state true
	I0907 00:57:54.782918   46354 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-940806"
	W0907 00:57:54.782926   46354 addons.go:240] addon metrics-server should already be in state true
	I0907 00:57:54.782880   46354 config.go:182] Loaded profile config "old-k8s-version-940806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:57:54.782889   46354 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-940806"
	I0907 00:57:54.783049   46354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-940806"
	I0907 00:57:54.782963   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.782963   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.783499   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783500   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783528   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.783533   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.783571   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.783599   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.802026   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44005
	I0907 00:57:54.802487   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.803108   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.803131   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.803164   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
	I0907 00:57:54.803164   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I0907 00:57:54.803512   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.803674   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.803710   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.804184   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.804215   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.804239   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.804259   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.804311   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.804327   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.804569   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.804668   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.804832   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.805067   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.805094   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.821660   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39335
	I0907 00:57:54.822183   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.822694   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.822720   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.823047   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.823247   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.823707   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I0907 00:57:54.824135   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.825021   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.825046   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.825082   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.827174   46354 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0907 00:57:54.825428   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.828768   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:57:54.828787   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:57:54.828808   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.829357   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.831479   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.833553   46354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:57:54.832288   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.832776   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.834996   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.835038   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.835055   46354 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:57:54.835067   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:57:54.835083   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.835140   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.835307   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.835410   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.836403   46354 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-940806"
	W0907 00:57:54.836424   46354 addons.go:240] addon default-storageclass should already be in state true
	I0907 00:57:54.836451   46354 host.go:66] Checking if "old-k8s-version-940806" exists ...
	I0907 00:57:54.836822   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.836851   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.838476   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.838920   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.838951   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.839218   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.839540   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.839719   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.839896   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.854883   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0907 00:57:54.855311   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.855830   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.855858   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.856244   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.856713   46354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:57:54.856737   46354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:57:54.872940   46354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39937
	I0907 00:57:54.873442   46354 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:57:54.874030   46354 main.go:141] libmachine: Using API Version  1
	I0907 00:57:54.874057   46354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:57:54.874433   46354 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:57:54.874665   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetState
	I0907 00:57:54.876568   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .DriverName
	I0907 00:57:54.876928   46354 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:57:54.876947   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:57:54.876966   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHHostname
	I0907 00:57:54.879761   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.879993   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:83:50", ip: ""} in network mk-old-k8s-version-940806: {Iface:virbr2 ExpiryTime:2023-09-07 01:51:46 +0000 UTC Type:0 Mac:52:54:00:1f:83:50 Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-940806 Clientid:01:52:54:00:1f:83:50}
	I0907 00:57:54.880015   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | domain old-k8s-version-940806 has defined IP address 192.168.83.245 and MAC address 52:54:00:1f:83:50 in network mk-old-k8s-version-940806
	I0907 00:57:54.880248   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHPort
	I0907 00:57:54.880424   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHKeyPath
	I0907 00:57:54.880591   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .GetSSHUsername
	I0907 00:57:54.880694   46354 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/old-k8s-version-940806/id_rsa Username:docker}
	I0907 00:57:54.933915   46354 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-940806" context rescaled to 1 replicas
	I0907 00:57:54.933965   46354 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:57:54.936214   46354 out.go:177] * Verifying Kubernetes components...
	I0907 00:57:54.937844   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:57:55.011087   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:57:55.011114   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0907 00:57:55.020666   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:57:55.038411   46354 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-940806" to be "Ready" ...
	I0907 00:57:55.038474   46354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0907 00:57:55.066358   46354 node_ready.go:49] node "old-k8s-version-940806" has status "Ready":"True"
	I0907 00:57:55.066382   46354 node_ready.go:38] duration metric: took 27.94281ms waiting for node "old-k8s-version-940806" to be "Ready" ...
	I0907 00:57:55.066393   46354 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:57:55.076936   46354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace to be "Ready" ...
	I0907 00:57:55.118806   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:57:55.118835   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:57:55.145653   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:57:55.158613   46354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:57:55.158636   46354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:57:55.214719   46354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:57:56.905329   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.884630053s)
	I0907 00:57:56.905379   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905377   46354 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.866875113s)
	I0907 00:57:56.905392   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905403   46354 start.go:901] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I0907 00:57:56.905417   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.759735751s)
	I0907 00:57:56.905441   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905455   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905794   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.905842   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.905858   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.905878   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.905895   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.905910   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.905963   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906013   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906037   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.906047   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.906286   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906340   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906293   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906325   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906436   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:56.906449   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:56.906459   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:56.906630   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906729   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:56.906732   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:56.906749   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.087889   46354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.873113752s)
	I0907 00:57:57.087946   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:57.087979   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:57.088366   46354 main.go:141] libmachine: (old-k8s-version-940806) DBG | Closing plugin on server side
	I0907 00:57:57.089849   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:57.089880   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.089892   46354 main.go:141] libmachine: Making call to close driver server
	I0907 00:57:57.089899   46354 main.go:141] libmachine: (old-k8s-version-940806) Calling .Close
	I0907 00:57:57.090126   46354 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:57:57.090146   46354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:57:57.090155   46354 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-940806"
	I0907 00:57:57.093060   46354 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0907 00:57:57.094326   46354 addons.go:502] enable addons completed in 2.311555161s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0907 00:57:57.115594   46354 pod_ready.go:102] pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace has status "Ready":"False"
	I0907 00:57:59.609005   46354 pod_ready.go:102] pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace has status "Ready":"False"
	I0907 00:58:00.605260   46354 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rf6lv" not found
	I0907 00:58:00.605285   46354 pod_ready.go:81] duration metric: took 5.528319392s waiting for pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace to be "Ready" ...
	E0907 00:58:00.605296   46354 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-rf6lv" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rf6lv" not found
	I0907 00:58:00.605305   46354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.623994   46354 pod_ready.go:92] pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace has status "Ready":"True"
	I0907 00:58:02.624020   46354 pod_ready.go:81] duration metric: took 2.01870868s waiting for pod "coredns-5644d7b6d9-rvbpw" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.624039   46354 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bt454" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.629264   46354 pod_ready.go:92] pod "kube-proxy-bt454" in "kube-system" namespace has status "Ready":"True"
	I0907 00:58:02.629282   46354 pod_ready.go:81] duration metric: took 5.236562ms waiting for pod "kube-proxy-bt454" in "kube-system" namespace to be "Ready" ...
	I0907 00:58:02.629288   46354 pod_ready.go:38] duration metric: took 7.562884581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0907 00:58:02.629301   46354 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:58:02.629339   46354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:58:02.644494   46354 api_server.go:72] duration metric: took 7.710498225s to wait for apiserver process to appear ...
	I0907 00:58:02.644515   46354 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:58:02.644529   46354 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I0907 00:58:02.651352   46354 api_server.go:279] https://192.168.83.245:8443/healthz returned 200:
	ok
	I0907 00:58:02.652147   46354 api_server.go:141] control plane version: v1.16.0
	I0907 00:58:02.652186   46354 api_server.go:131] duration metric: took 7.646808ms to wait for apiserver health ...
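	(Aside, not part of the captured log: the healthz step above only expects an HTTP 200 with body "ok" from the apiserver. A minimal Go sketch of the same kind of probe, illustrative only and not minikube's api_server.go; certificate verification is skipped because only reachability is being tested:)

	// healthz_sketch.go - illustrative only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skip cert verification: the probe only checks reachability and the "ok" body.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.83.245:8443/healthz")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, string(body))
	}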
	I0907 00:58:02.652199   46354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:58:02.656482   46354 system_pods.go:59] 4 kube-system pods found
	I0907 00:58:02.656506   46354 system_pods.go:61] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.656513   46354 system_pods.go:61] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.656524   46354 system_pods.go:61] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.656534   46354 system_pods.go:61] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.656541   46354 system_pods.go:74] duration metric: took 4.333279ms to wait for pod list to return data ...
	I0907 00:58:02.656553   46354 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:58:02.659079   46354 default_sa.go:45] found service account: "default"
	I0907 00:58:02.659102   46354 default_sa.go:55] duration metric: took 2.543265ms for default service account to be created ...
	I0907 00:58:02.659110   46354 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:58:02.663028   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:02.663050   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.663058   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.663069   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.663077   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.663094   46354 retry.go:31] will retry after 205.506153ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:02.874261   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:02.874291   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:02.874299   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:02.874309   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:02.874318   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:02.874335   46354 retry.go:31] will retry after 265.617543ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:03.145704   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:03.145736   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:03.145745   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:03.145755   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:03.145764   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:03.145782   46354 retry.go:31] will retry after 459.115577ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:03.610425   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:03.610458   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:03.610466   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:03.610474   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:03.610482   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:03.610498   46354 retry.go:31] will retry after 411.97961ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:04.026961   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:04.026992   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:04.026997   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:04.027004   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:04.027011   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:04.027024   46354 retry.go:31] will retry after 633.680519ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:04.665840   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:04.665868   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:04.665877   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:04.665889   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:04.665899   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:04.665916   46354 retry.go:31] will retry after 680.962565ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:05.352621   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:05.352644   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:05.352652   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:05.352699   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:05.352710   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:05.352725   46354 retry.go:31] will retry after 939.996523ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:06.298740   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:06.298765   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:06.298770   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:06.298791   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:06.298803   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:06.298820   46354 retry.go:31] will retry after 1.103299964s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:07.407728   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:07.407753   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:07.407758   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:07.407766   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:07.407772   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:07.407785   46354 retry.go:31] will retry after 1.13694803s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:08.550198   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:08.550228   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:08.550236   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:08.550245   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:08.550252   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:08.550269   46354 retry.go:31] will retry after 2.240430665s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:10.796203   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:10.796228   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:10.796233   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:10.796240   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:10.796246   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:10.796261   46354 retry.go:31] will retry after 2.183105097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:12.985467   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:12.985491   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:12.985500   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:12.985510   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:12.985518   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:12.985535   46354 retry.go:31] will retry after 2.428546683s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:15.419138   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:15.419163   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:15.419168   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:15.419174   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:15.419181   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:15.419195   46354 retry.go:31] will retry after 2.778392129s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:18.202590   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:18.202621   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:18.202629   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:18.202639   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:18.202648   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:18.202670   46354 retry.go:31] will retry after 5.204092587s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:23.412120   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:23.412144   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:23.412157   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:23.412164   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:23.412171   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:23.412187   46354 retry.go:31] will retry after 6.095121382s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:29.513424   46354 system_pods.go:86] 4 kube-system pods found
	I0907 00:58:29.513449   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:29.513454   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:29.513462   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:29.513468   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:29.513482   46354 retry.go:31] will retry after 6.142679131s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:35.662341   46354 system_pods.go:86] 5 kube-system pods found
	I0907 00:58:35.662367   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:35.662372   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:35.662377   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Pending
	I0907 00:58:35.662383   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:35.662390   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:35.662408   46354 retry.go:31] will retry after 10.800349656s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0907 00:58:46.468817   46354 system_pods.go:86] 6 kube-system pods found
	I0907 00:58:46.468845   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:46.468854   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:58:46.468859   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:46.468867   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:58:46.468876   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:46.468884   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:46.468901   46354 retry.go:31] will retry after 10.570531489s: missing components: kube-apiserver, kube-controller-manager
	I0907 00:58:57.047784   46354 system_pods.go:86] 8 kube-system pods found
	I0907 00:58:57.047865   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:58:57.047892   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:58:57.048256   46354 system_pods.go:89] "kube-apiserver-old-k8s-version-940806" [6a513b1a-cad2-4136-a7b0-a86df04f6c09] Pending
	I0907 00:58:57.048272   46354 system_pods.go:89] "kube-controller-manager-old-k8s-version-940806" [5ff6ffdb-1b2c-4498-84ad-e2811a8dd16a] Pending
	I0907 00:58:57.048279   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:58:57.048286   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:58:57.048301   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:58:57.048315   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:58:57.048345   46354 retry.go:31] will retry after 14.06926028s: missing components: kube-apiserver, kube-controller-manager
	I0907 00:59:11.124216   46354 system_pods.go:86] 8 kube-system pods found
	I0907 00:59:11.124242   46354 system_pods.go:89] "coredns-5644d7b6d9-rvbpw" [c3e1982c-3155-42a5-b265-97954da89614] Running
	I0907 00:59:11.124248   46354 system_pods.go:89] "etcd-old-k8s-version-940806" [e1b66998-1a84-4ee0-90bd-b776f3906aa4] Running
	I0907 00:59:11.124252   46354 system_pods.go:89] "kube-apiserver-old-k8s-version-940806" [6a513b1a-cad2-4136-a7b0-a86df04f6c09] Running
	I0907 00:59:11.124257   46354 system_pods.go:89] "kube-controller-manager-old-k8s-version-940806" [5ff6ffdb-1b2c-4498-84ad-e2811a8dd16a] Running
	I0907 00:59:11.124261   46354 system_pods.go:89] "kube-proxy-bt454" [941e0f06-6bdf-4491-a498-1286919f0d1a] Running
	I0907 00:59:11.124265   46354 system_pods.go:89] "kube-scheduler-old-k8s-version-940806" [1f7746e3-365b-4986-9222-4fbfe033e99d] Running
	I0907 00:59:11.124272   46354 system_pods.go:89] "metrics-server-74d5856cc6-bgjns" [5fc290ae-921e-4c42-8b68-917a042aa083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:59:11.124276   46354 system_pods.go:89] "storage-provisioner" [13b357bf-80b7-4fb0-90ec-c4ea3df3de88] Running
	I0907 00:59:11.124283   46354 system_pods.go:126] duration metric: took 1m8.465167722s to wait for k8s-apps to be running ...
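	(Aside, not part of the captured log: the retry.go lines above poll the kube-system pod list with a growing delay until no components are reported missing. A minimal sketch of that retry-with-backoff shape, illustrative only; the actual delays logged above are jittered rather than strictly doubled:)

	// retry_sketch.go - illustrative only; not minikube's retry.go.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls check() until it succeeds or the deadline elapses,
	// roughly doubling the delay between attempts.
	func waitFor(check func() error, deadline time.Duration) error {
		delay := 200 * time.Millisecond
		start := time.Now()
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("timed out after %s: %w", deadline, err)
			}
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
	}

	func main() {
		n := 0
		// Hypothetical check that succeeds on the fourth attempt.
		fmt.Println(waitFor(func() error {
			if n++; n < 4 {
				return errors.New("missing components: kube-apiserver, kube-controller-manager")
			}
			return nil
		}, 6*time.Minute))
	}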
	I0907 00:59:11.124289   46354 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:59:11.124328   46354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:59:11.140651   46354 system_svc.go:56] duration metric: took 16.348641ms WaitForService to wait for kubelet.
	I0907 00:59:11.140686   46354 kubeadm.go:581] duration metric: took 1m16.206690472s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0907 00:59:11.140714   46354 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:59:11.144185   46354 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0907 00:59:11.144212   46354 node_conditions.go:123] node cpu capacity is 2
	I0907 00:59:11.144224   46354 node_conditions.go:105] duration metric: took 3.50462ms to run NodePressure ...
	I0907 00:59:11.144235   46354 start.go:228] waiting for startup goroutines ...
	I0907 00:59:11.144244   46354 start.go:233] waiting for cluster config update ...
	I0907 00:59:11.144259   46354 start.go:242] writing updated cluster config ...
	I0907 00:59:11.144547   46354 ssh_runner.go:195] Run: rm -f paused
	I0907 00:59:11.194224   46354 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0907 00:59:11.196420   46354 out.go:177] 
	W0907 00:59:11.197939   46354 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0907 00:59:11.199287   46354 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0907 00:59:11.200770   46354 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-940806" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-07 00:51:46 UTC, ends at Thu 2023-09-07 01:10:57 UTC. --
	Sep 07 01:10:56 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:56.534882565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a186556e-3ddb-49e9-9ef9-c689c86be2a4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:56 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:56.984309162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cbc3ba6d-5913-4375-91ab-b3a5b0ded43c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:56 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:56.984401593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cbc3ba6d-5913-4375-91ab-b3a5b0ded43c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:56 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:56.984638397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cbc3ba6d-5913-4375-91ab-b3a5b0ded43c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.020296058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6f0a8203-564e-4249-8146-5bc188372313 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.020380506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6f0a8203-564e-4249-8146-5bc188372313 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.020550848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6f0a8203-564e-4249-8146-5bc188372313 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.056630456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c45f09ac-ba89-4724-95ec-edf5228f8c09 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.056696768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c45f09ac-ba89-4724-95ec-edf5228f8c09 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.056848122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c45f09ac-ba89-4724-95ec-edf5228f8c09 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.092345256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=62fc54fd-4a17-4946-8697-787d03d6928a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.092536652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=62fc54fd-4a17-4946-8697-787d03d6928a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.092707223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=62fc54fd-4a17-4946-8697-787d03d6928a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.131485309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aee1b178-4cd7-4880-af44-a09f68ac8bf6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.131579570Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aee1b178-4cd7-4880-af44-a09f68ac8bf6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.131738299Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aee1b178-4cd7-4880-af44-a09f68ac8bf6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.166762353Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5609fa05-8976-4808-96c5-5e0cbbad930f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.166859517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5609fa05-8976-4808-96c5-5e0cbbad930f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.167179238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5609fa05-8976-4808-96c5-5e0cbbad930f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.201701150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=caaa7679-496e-40b9-8a02-cb3a8a920734 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.201826071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=caaa7679-496e-40b9-8a02-cb3a8a920734 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.202151659Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=caaa7679-496e-40b9-8a02-cb3a8a920734 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.233867275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b6e37eba-8c79-47ef-9c45-408924721f9e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.234016658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b6e37eba-8c79-47ef-9c45-408924721f9e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 07 01:10:57 old-k8s-version-940806 crio[712]: time="2023-09-07 01:10:57.234216682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248,PodSandboxId:a22a0983e839b2ab47051570e013aba71c8729b529d75eb9d537e4905d7b37b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694048277773764547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b357bf-80b7-4fb0-90ec-c4ea3df3de88,},Annotations:map[string]string{io.kubernetes.container.hash: 8b0f9b73,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958,PodSandboxId:f45d4c026df49da38a9beed4d6f269cb0657d94e6fefad6583c833ef9d309183,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694048277475545565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bt454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 941e0f06-6bdf-4491-a498-1286919f0d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2b07be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219,PodSandboxId:3985a65b07e08f1849f8f7c34cf8cb3cf31fb0357ea380085e2ccb9865090ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694048276310387554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-rvbpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e1982c-3155-42a5-b265-97954da89614,},Annotations:map[string]string{io.kubernetes.container.hash: 822cfd96,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6,PodSandboxId:712c8a64609b9c63532e726c0b3dfaed447f2458e514b10d38dc398ada177ede,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694048250818254213,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5c11213c8d18acd8c33db64a941705b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8a841940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e,PodSandboxId:0fa66fd27dad8ea46ec1ac3441ebffb3b0b6fd844fe49357ed5c4c43944436f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694048249476329062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc,PodSandboxId:f4de0cb85a1d10d733afa2c6b538f3eecc2b0d17b52261729aac14e24361fafb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694048249126342714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f189641ddb00c33c542b58205bb406e,},Annotations:map[string]string{io.kubern
etes.container.hash: acfefdaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046,PodSandboxId:22cc68c770b8a4441b410bff330436cabc8a70debcce9958acbab707cc6513c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694048249065241339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-940806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b6e37eba-8c79-47ef-9c45-408924721f9e name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	505fd87a59c43       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   a22a0983e839b
	c16bcf217c95b       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   12 minutes ago      Running             kube-proxy                0                   f45d4c026df49
	dcb0272fd2f33       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   13 minutes ago      Running             coredns                   0                   3985a65b07e08
	6e0283355220b       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   13 minutes ago      Running             etcd                      0                   712c8a64609b9
	9b851c02c8fdc       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   13 minutes ago      Running             kube-scheduler            0                   0fa66fd27dad8
	8e12829e3eb63       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   13 minutes ago      Running             kube-apiserver            0                   f4de0cb85a1d1
	00ea9e73f82d0       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   13 minutes ago      Running             kube-controller-manager   0                   22cc68c770b8a
	
	* 
	* ==> coredns [dcb0272fd2f33196eb8dad03f7c130a4b2bbe4f88287952a004e49a83bef1219] <==
	* .:53
	2023-09-07T00:57:56.709Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-09-07T00:57:56.709Z [INFO] CoreDNS-1.6.2
	2023-09-07T00:57:56.709Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-09-07T00:58:21.040Z [INFO] plugin/reload: Running configuration MD5 = 6d61b2f41ed11e6ad276aa627263dbc3
	[INFO] Reloading complete
	2023-09-07T00:58:21.060Z [INFO] 127.0.0.1:42553 - 6572 "HINFO IN 7264912749835336230.1648971124024017391. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020135645s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-940806
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-940806
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=old-k8s-version-940806
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_07T00_57_39_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Sep 2023 00:57:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Sep 2023 01:10:34 +0000   Thu, 07 Sep 2023 00:57:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Sep 2023 01:10:34 +0000   Thu, 07 Sep 2023 00:57:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Sep 2023 01:10:34 +0000   Thu, 07 Sep 2023 00:57:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Sep 2023 01:10:34 +0000   Thu, 07 Sep 2023 00:57:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.245
	  Hostname:    old-k8s-version-940806
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 d1c883ac860c4cecba55236dd31e2013
	 System UUID:                d1c883ac-860c-4cec-ba55-236dd31e2013
	 Boot ID:                    4ad06931-8146-4d72-8fdb-ee1d1da21cbd
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-rvbpw                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                etcd-old-k8s-version-940806                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-apiserver-old-k8s-version-940806             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-940806    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-bt454                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-940806             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-bgjns                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-940806     Node old-k8s-version-940806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet, old-k8s-version-940806     Node old-k8s-version-940806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x8 over 13m)  kubelet, old-k8s-version-940806     Node old-k8s-version-940806 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-940806  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep 7 00:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093709] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.982880] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.520999] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158046] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.568740] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.821645] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.132260] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.150284] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.115884] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.221809] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Sep 7 00:52] systemd-fstab-generator[1027]: Ignoring "noauto" for root device
	[  +0.440443] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.318866] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.818834] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 7 00:57] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.645504] systemd-fstab-generator[3224]: Ignoring "noauto" for root device
	[Sep 7 00:58] kauditd_printk_skb: 11 callbacks suppressed
	
	* 
	* ==> etcd [6e0283355220b3c5d18d052dc0fba6bdc16b1ff8ca023f5ca069eac443d9dcb6] <==
	* 2023-09-07 00:57:30.993201 I | raft: ba939c90038af751 became follower at term 1
	2023-09-07 00:57:31.002432 W | auth: simple token is not cryptographically signed
	2023-09-07 00:57:31.009348 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-09-07 00:57:31.010664 I | etcdserver: ba939c90038af751 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-07 00:57:31.011122 I | etcdserver/membership: added member ba939c90038af751 [https://192.168.83.245:2380] to cluster 1b1c08270f79fa14
	2023-09-07 00:57:31.012746 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-07 00:57:31.013176 I | embed: listening for metrics on http://192.168.83.245:2381
	2023-09-07 00:57:31.013365 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-07 00:57:31.096479 I | raft: ba939c90038af751 is starting a new election at term 1
	2023-09-07 00:57:31.096584 I | raft: ba939c90038af751 became candidate at term 2
	2023-09-07 00:57:31.096670 I | raft: ba939c90038af751 received MsgVoteResp from ba939c90038af751 at term 2
	2023-09-07 00:57:31.096727 I | raft: ba939c90038af751 became leader at term 2
	2023-09-07 00:57:31.096750 I | raft: raft.node: ba939c90038af751 elected leader ba939c90038af751 at term 2
	2023-09-07 00:57:31.097337 I | etcdserver: setting up the initial cluster version to 3.3
	2023-09-07 00:57:31.097719 I | etcdserver: published {Name:old-k8s-version-940806 ClientURLs:[https://192.168.83.245:2379]} to cluster 1b1c08270f79fa14
	2023-09-07 00:57:31.097979 I | embed: ready to serve client requests
	2023-09-07 00:57:31.099224 I | embed: serving client requests on 192.168.83.245:2379
	2023-09-07 00:57:31.099430 I | embed: ready to serve client requests
	2023-09-07 00:57:31.100807 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-07 00:57:31.101466 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-07 00:57:31.101665 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-07 00:57:56.143108 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-940806\" " with result "range_response_count:1 size:4370" took too long (316.707587ms) to execute
	2023-09-07 00:57:56.634187 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/metrics-server\" " with result "range_response_count:0 size:5" took too long (145.961145ms) to execute
	2023-09-07 01:07:31.142046 I | mvcc: store.index: compact 666
	2023-09-07 01:07:31.144308 I | mvcc: finished scheduled compaction at 666 (took 1.763277ms)
	
	* 
	* ==> kernel <==
	*  01:10:57 up 19 min,  0 users,  load average: 0.29, 0.14, 0.15
	Linux old-k8s-version-940806 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8e12829e3eb6386c9a326911a077a03a29bc3d7451c894c51f095accc689a5fc] <==
	* I0907 01:03:35.378301       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0907 01:03:35.378428       1 handler_proxy.go:99] no RequestInfo found in the context
	E0907 01:03:35.378467       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:03:35.378478       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:05:35.379251       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0907 01:05:35.379458       1 handler_proxy.go:99] no RequestInfo found in the context
	E0907 01:05:35.379638       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:05:35.379655       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:07:35.381527       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0907 01:07:35.381690       1 handler_proxy.go:99] no RequestInfo found in the context
	E0907 01:07:35.381768       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:07:35.381776       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:08:35.382223       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0907 01:08:35.382355       1 handler_proxy.go:99] no RequestInfo found in the context
	E0907 01:08:35.382425       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:08:35.382436       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0907 01:10:35.382751       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0907 01:10:35.382865       1 handler_proxy.go:99] no RequestInfo found in the context
	E0907 01:10:35.383014       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0907 01:10:35.383026       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [00ea9e73f82d09fadb6493347403ebb4ed5a0c3a285d298c78cd055be88cf046] <==
	* E0907 01:04:28.072978       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:04:51.900385       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:04:58.325003       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:05:23.903236       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:05:28.577190       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:05:55.905500       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:05:58.829313       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:06:27.908431       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:06:29.081499       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0907 01:06:59.333698       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:06:59.910795       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:07:29.586368       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:07:31.913024       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:07:59.838116       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:08:03.915296       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:08:30.090609       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:08:35.917231       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:09:00.343136       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:09:07.919753       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:09:30.595359       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:09:39.922235       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:10:00.847860       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:10:11.924513       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0907 01:10:31.100385       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0907 01:10:43.926468       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [c16bcf217c95b775f32eaa13fd39003428a71ffd803dadf9fae36a2735722958] <==
	* W0907 00:57:57.740507       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0907 00:57:57.765234       1 node.go:135] Successfully retrieved node IP: 192.168.83.245
	I0907 00:57:57.765445       1 server_others.go:149] Using iptables Proxier.
	I0907 00:57:57.769177       1 server.go:529] Version: v1.16.0
	I0907 00:57:57.775835       1 config.go:131] Starting endpoints config controller
	I0907 00:57:57.775907       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0907 00:57:57.776137       1 config.go:313] Starting service config controller
	I0907 00:57:57.776169       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0907 00:57:57.891683       1 shared_informer.go:204] Caches are synced for service config 
	I0907 00:57:57.891880       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [9b851c02c8fdc32e4e68e18fd44f22b6dcb22d40f4df502565e004e1c9d2b38e] <==
	* W0907 00:57:34.393785       1 authentication.go:79] Authentication is disabled
	I0907 00:57:34.393804       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0907 00:57:34.397502       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0907 00:57:34.426244       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0907 00:57:34.427582       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0907 00:57:34.432574       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0907 00:57:34.432765       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0907 00:57:34.433707       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0907 00:57:34.433775       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0907 00:57:34.434201       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0907 00:57:34.434334       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0907 00:57:34.443826       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0907 00:57:34.444597       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0907 00:57:34.448161       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0907 00:57:35.429060       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0907 00:57:35.436058       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0907 00:57:35.436254       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0907 00:57:35.443397       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0907 00:57:35.447419       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0907 00:57:35.448015       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0907 00:57:35.449405       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0907 00:57:35.452036       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0907 00:57:35.453809       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0907 00:57:35.454882       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0907 00:57:35.456617       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-07 00:51:46 UTC, ends at Thu 2023-09-07 01:10:57 UTC. --
	Sep 07 01:06:28 old-k8s-version-940806 kubelet[3230]: E0907 01:06:28.055424    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:06:41 old-k8s-version-940806 kubelet[3230]: E0907 01:06:41.047079    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:06:56 old-k8s-version-940806 kubelet[3230]: E0907 01:06:56.046418    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:07:09 old-k8s-version-940806 kubelet[3230]: E0907 01:07:09.046507    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:07:20 old-k8s-version-940806 kubelet[3230]: E0907 01:07:20.046597    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:07:28 old-k8s-version-940806 kubelet[3230]: E0907 01:07:28.147036    3230 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Sep 07 01:07:34 old-k8s-version-940806 kubelet[3230]: E0907 01:07:34.046605    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:07:46 old-k8s-version-940806 kubelet[3230]: E0907 01:07:46.046242    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:07:57 old-k8s-version-940806 kubelet[3230]: E0907 01:07:57.046389    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:08:11 old-k8s-version-940806 kubelet[3230]: E0907 01:08:11.046741    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:08:23 old-k8s-version-940806 kubelet[3230]: E0907 01:08:23.046831    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:08:37 old-k8s-version-940806 kubelet[3230]: E0907 01:08:37.046083    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:08:51 old-k8s-version-940806 kubelet[3230]: E0907 01:08:51.070402    3230 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 07 01:08:51 old-k8s-version-940806 kubelet[3230]: E0907 01:08:51.070481    3230 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 07 01:08:51 old-k8s-version-940806 kubelet[3230]: E0907 01:08:51.070529    3230 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 07 01:08:51 old-k8s-version-940806 kubelet[3230]: E0907 01:08:51.070559    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Sep 07 01:09:04 old-k8s-version-940806 kubelet[3230]: E0907 01:09:04.046390    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:09:18 old-k8s-version-940806 kubelet[3230]: E0907 01:09:18.048082    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:09:33 old-k8s-version-940806 kubelet[3230]: E0907 01:09:33.046827    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:09:48 old-k8s-version-940806 kubelet[3230]: E0907 01:09:48.047149    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:10:00 old-k8s-version-940806 kubelet[3230]: E0907 01:10:00.046737    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:10:12 old-k8s-version-940806 kubelet[3230]: E0907 01:10:12.046668    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:10:25 old-k8s-version-940806 kubelet[3230]: E0907 01:10:25.047067    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:10:38 old-k8s-version-940806 kubelet[3230]: E0907 01:10:38.046429    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 07 01:10:52 old-k8s-version-940806 kubelet[3230]: E0907 01:10:52.047377    3230 pod_workers.go:191] Error syncing pod 5fc290ae-921e-4c42-8b68-917a042aa083 ("metrics-server-74d5856cc6-bgjns_kube-system(5fc290ae-921e-4c42-8b68-917a042aa083)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [505fd87a59c439d8ab2f8c47ec9fc39b2ecfb51feee0f14c878a1a281d5ba248] <==
	* I0907 00:57:57.911712       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0907 00:57:57.923115       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0907 00:57:57.923199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0907 00:57:57.937471       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0907 00:57:57.938554       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-940806_c208f978-8d30-4fb4-b9b1-cd6dc7be4c2e!
	I0907 00:57:57.952853       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acca27ea-be6d-42da-a4af-f108a00ace8f", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-940806_c208f978-8d30-4fb4-b9b1-cd6dc7be4c2e became leader
	I0907 00:57:58.040208       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-940806_c208f978-8d30-4fb4-b9b1-cd6dc7be4c2e!
	

                                                
                                                
-- /stdout --
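Note on the kubelet log above: metrics-server never starts because its image reference points at the unresolvable registry fake.domain, so every pull attempt fails with "dial tcp: lookup fake.domain: no such host" and the pod stays in ImagePullBackOff. A minimal sketch for confirming the pull failure from inside the guest, assuming the old-k8s-version-940806 profile from this run is still up and that crictl is available in the node image (both are assumptions, not part of the captured output):

	out/minikube-linux-amd64 -p old-k8s-version-940806 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	# expected to fail with the same "no such host" error shown in the kubelet log above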
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-940806 -n old-k8s-version-940806
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-940806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-bgjns
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-940806 describe pod metrics-server-74d5856cc6-bgjns
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-940806 describe pod metrics-server-74d5856cc6-bgjns: exit status 1 (65.586225ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-bgjns" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-940806 describe pod metrics-server-74d5856cc6-bgjns: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (164.31s)
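For reference, the post-mortem above boils down to two kubectl invocations. The pod listing uses -A (all namespaces), while the follow-up describe is issued without a namespace flag, so metrics-server-74d5856cc6-bgjns (which lives in kube-system per the node description above) is looked up in the context's default namespace; that is the most likely reason for the NotFound error. A sketch of the equivalent namespaced commands, assuming the old-k8s-version-940806 context from this run is still available:

	kubectl --context old-k8s-version-940806 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	kubectl --context old-k8s-version-940806 describe pod metrics-server-74d5856cc6-bgjns --namespace=kube-system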

                                                
                                    

Test pass (226/290)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 44.76
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.28.1/json-events 16.21
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.13
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
19 TestBinaryMirror 0.55
20 TestOffline 103.12
22 TestAddons/Setup 154.29
24 TestAddons/parallel/Registry 17.66
26 TestAddons/parallel/InspektorGadget 11.96
27 TestAddons/parallel/MetricsServer 6.74
28 TestAddons/parallel/HelmTiller 14.82
30 TestAddons/parallel/CSI 80.35
31 TestAddons/parallel/Headlamp 16.62
32 TestAddons/parallel/CloudSpanner 6.14
35 TestAddons/serial/GCPAuth/Namespaces 0.12
37 TestCertOptions 51.84
38 TestCertExpiration 277.74
40 TestForceSystemdFlag 83.47
41 TestForceSystemdEnv 79.42
43 TestKVMDriverInstallOrUpdate 3.98
47 TestErrorSpam/setup 48.4
48 TestErrorSpam/start 0.32
49 TestErrorSpam/status 0.74
50 TestErrorSpam/pause 1.45
51 TestErrorSpam/unpause 1.56
52 TestErrorSpam/stop 2.2
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 99.18
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 53.32
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 2.78
64 TestFunctional/serial/CacheCmd/cache/add_local 2.23
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
72 TestFunctional/serial/ExtraConfig 35.44
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.34
75 TestFunctional/serial/LogsFileCmd 1.35
76 TestFunctional/serial/InvalidService 4.93
78 TestFunctional/parallel/ConfigCmd 0.28
79 TestFunctional/parallel/DashboardCmd 28.6
80 TestFunctional/parallel/DryRun 0.29
81 TestFunctional/parallel/InternationalLanguage 0.14
82 TestFunctional/parallel/StatusCmd 1.29
86 TestFunctional/parallel/ServiceCmdConnect 13.76
87 TestFunctional/parallel/AddonsCmd 0.11
88 TestFunctional/parallel/PersistentVolumeClaim 49.48
90 TestFunctional/parallel/SSHCmd 0.4
91 TestFunctional/parallel/CpCmd 0.85
92 TestFunctional/parallel/MySQL 33.61
93 TestFunctional/parallel/FileSync 0.21
94 TestFunctional/parallel/CertSync 1.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
102 TestFunctional/parallel/License 0.59
103 TestFunctional/parallel/Version/short 0.04
104 TestFunctional/parallel/Version/components 1.32
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.37
109 TestFunctional/parallel/ImageCommands/ImageBuild 12.17
110 TestFunctional/parallel/ImageCommands/Setup 2.02
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
114 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
124 TestFunctional/parallel/ProfileCmd/profile_list 0.27
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
126 TestFunctional/parallel/ServiceCmd/DeployApp 13.42
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.56
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.47
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.26
130 TestFunctional/parallel/ServiceCmd/List 0.34
131 TestFunctional/parallel/MountCmd/any-port 11.14
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
134 TestFunctional/parallel/ServiceCmd/Format 0.43
135 TestFunctional/parallel/ServiceCmd/URL 0.34
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.26
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.09
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.2
140 TestFunctional/parallel/MountCmd/specific-port 1.86
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
142 TestFunctional/delete_addon-resizer_images 0.07
143 TestFunctional/delete_my-image_image 0.01
144 TestFunctional/delete_minikube_cached_images 0.01
148 TestIngressAddonLegacy/StartLegacyK8sCluster 84.25
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.49
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.57
155 TestJSONOutput/start/Command 101.48
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.65
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.61
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 92.17
174 TestJSONOutput/stop/Audit 0
178 TestErrorJSONOutput 0.18
183 TestMainNoArgs 0.04
184 TestMinikubeProfile 99.36
187 TestMountStart/serial/StartWithMountFirst 32.18
188 TestMountStart/serial/VerifyMountFirst 0.37
189 TestMountStart/serial/StartWithMountSecond 28.17
190 TestMountStart/serial/VerifyMountSecond 0.38
191 TestMountStart/serial/DeleteFirst 0.84
192 TestMountStart/serial/VerifyMountPostDelete 0.38
193 TestMountStart/serial/Stop 1.21
194 TestMountStart/serial/RestartStopped 24.7
195 TestMountStart/serial/VerifyMountPostStop 0.36
198 TestMultiNode/serial/FreshStart2Nodes 123.74
199 TestMultiNode/serial/DeployApp2Nodes 6.43
201 TestMultiNode/serial/AddNode 45.13
202 TestMultiNode/serial/ProfileList 0.21
203 TestMultiNode/serial/CopyFile 7.4
204 TestMultiNode/serial/StopNode 2.27
205 TestMultiNode/serial/StartAfterStop 34.11
207 TestMultiNode/serial/DeleteNode 1.75
209 TestMultiNode/serial/RestartMultiNode 444.32
210 TestMultiNode/serial/ValidateNameConflict 50.05
217 TestScheduledStopUnix 116.5
223 TestKubernetesUpgrade 172.17
226 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
227 TestNoKubernetes/serial/StartWithK8s 106.22
236 TestPause/serial/Start 69.09
237 TestNoKubernetes/serial/StartWithStopK8s 41.44
238 TestNoKubernetes/serial/Start 33.83
240 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
241 TestNoKubernetes/serial/ProfileList 0.92
242 TestNoKubernetes/serial/Stop 2.23
243 TestNoKubernetes/serial/StartNoArgs 53.55
244 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
252 TestNetworkPlugins/group/false 2.77
256 TestStoppedBinaryUpgrade/Setup 1.96
259 TestStartStop/group/old-k8s-version/serial/FirstStart 132.22
261 TestStartStop/group/no-preload/serial/FirstStart 84.94
263 TestStartStop/group/embed-certs/serial/FirstStart 62.83
264 TestStartStop/group/old-k8s-version/serial/DeployApp 11.46
265 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.05
267 TestStartStop/group/no-preload/serial/DeployApp 11.48
268 TestStartStop/group/embed-certs/serial/DeployApp 10.47
269 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.22
271 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.21
273 TestStoppedBinaryUpgrade/MinikubeLogs 0.37
275 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 61.51
277 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.41
278 TestStartStop/group/old-k8s-version/serial/SecondStart 798.29
279 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
283 TestStartStop/group/no-preload/serial/SecondStart 611.65
284 TestStartStop/group/embed-certs/serial/SecondStart 566.12
286 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 474.36
296 TestStartStop/group/newest-cni/serial/FirstStart 62.93
297 TestStartStop/group/newest-cni/serial/DeployApp 0
298 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.8
299 TestStartStop/group/newest-cni/serial/Stop 11.11
300 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
301 TestStartStop/group/newest-cni/serial/SecondStart 54.32
302 TestNetworkPlugins/group/auto/Start 109.21
303 TestNetworkPlugins/group/kindnet/Start 82.11
304 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
305 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
306 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
307 TestStartStop/group/newest-cni/serial/Pause 3.19
308 TestNetworkPlugins/group/calico/Start 110.84
309 TestNetworkPlugins/group/auto/KubeletFlags 0.21
310 TestNetworkPlugins/group/auto/NetCatPod 11.44
311 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
312 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
313 TestNetworkPlugins/group/kindnet/NetCatPod 12.36
314 TestNetworkPlugins/group/auto/DNS 0.2
315 TestNetworkPlugins/group/auto/Localhost 0.21
316 TestNetworkPlugins/group/auto/HairPin 0.21
317 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
318 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.87
319 TestNetworkPlugins/group/custom-flannel/Start 92
320 TestNetworkPlugins/group/kindnet/DNS 0.22
321 TestNetworkPlugins/group/kindnet/Localhost 0.19
322 TestNetworkPlugins/group/kindnet/HairPin 0.18
323 TestNetworkPlugins/group/enable-default-cni/Start 123.93
324 TestNetworkPlugins/group/flannel/Start 128.36
325 TestNetworkPlugins/group/calico/ControllerPod 5.03
326 TestNetworkPlugins/group/calico/KubeletFlags 0.23
327 TestNetworkPlugins/group/calico/NetCatPod 12.46
328 TestNetworkPlugins/group/calico/DNS 0.19
329 TestNetworkPlugins/group/calico/Localhost 0.17
330 TestNetworkPlugins/group/calico/HairPin 0.16
331 TestNetworkPlugins/group/bridge/Start 123.22
332 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
333 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.51
334 TestNetworkPlugins/group/custom-flannel/DNS 0.27
335 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
336 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.41
339 TestNetworkPlugins/group/flannel/ControllerPod 5.02
340 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
341 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
342 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
344 TestNetworkPlugins/group/flannel/NetCatPod 11.37
345 TestNetworkPlugins/group/flannel/DNS 0.19
346 TestNetworkPlugins/group/flannel/Localhost 0.19
347 TestNetworkPlugins/group/flannel/HairPin 0.16
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
349 TestNetworkPlugins/group/bridge/NetCatPod 12.45
350 TestNetworkPlugins/group/bridge/DNS 0.17
351 TestNetworkPlugins/group/bridge/Localhost 0.14
352 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.16.0/json-events (44.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-435150 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-435150 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (44.755635974s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (44.76s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-435150
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-435150: exit status 85 (55.087188ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-435150 | jenkins | v1.31.2 | 06 Sep 23 23:37 UTC |          |
	|         | -p download-only-435150        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 23:37:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 23:37:41.344816   13669 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:37:41.344920   13669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:37:41.344927   13669 out.go:309] Setting ErrFile to fd 2...
	I0906 23:37:41.344932   13669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:37:41.345113   13669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	W0906 23:37:41.345226   13669 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17174-6470/.minikube/config/config.json: open /home/jenkins/minikube-integration/17174-6470/.minikube/config/config.json: no such file or directory
	I0906 23:37:41.345803   13669 out.go:303] Setting JSON to true
	I0906 23:37:41.346638   13669 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1206,"bootTime":1694042256,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:37:41.346689   13669 start.go:138] virtualization: kvm guest
	I0906 23:37:41.349292   13669 out.go:97] [download-only-435150] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0906 23:37:41.350904   13669 out.go:169] MINIKUBE_LOCATION=17174
	I0906 23:37:41.349420   13669 notify.go:220] Checking for updates...
	W0906 23:37:41.349455   13669 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 23:37:41.353639   13669 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:37:41.355068   13669 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0906 23:37:41.356409   13669 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0906 23:37:41.358022   13669 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0906 23:37:41.361014   13669 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 23:37:41.361255   13669 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 23:37:41.472534   13669 out.go:97] Using the kvm2 driver based on user configuration
	I0906 23:37:41.472572   13669 start.go:298] selected driver: kvm2
	I0906 23:37:41.472578   13669 start.go:902] validating driver "kvm2" against <nil>
	I0906 23:37:41.472904   13669 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:37:41.473020   13669 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 23:37:41.487211   13669 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0906 23:37:41.487262   13669 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 23:37:41.487730   13669 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0906 23:37:41.487889   13669 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 23:37:41.487920   13669 cni.go:84] Creating CNI manager for ""
	I0906 23:37:41.487931   13669 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 23:37:41.487938   13669 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 23:37:41.487944   13669 start_flags.go:321] config:
	{Name:download-only-435150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-435150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:37:41.488127   13669 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:37:41.490153   13669 out.go:97] Downloading VM boot image ...
	I0906 23:37:41.490182   13669 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0906 23:37:50.526919   13669 out.go:97] Starting control plane node download-only-435150 in cluster download-only-435150
	I0906 23:37:50.526937   13669 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0906 23:37:50.638285   13669 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0906 23:37:50.638320   13669 cache.go:57] Caching tarball of preloaded images
	I0906 23:37:50.638489   13669 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0906 23:37:50.640536   13669 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0906 23:37:50.640557   13669 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0906 23:37:50.756129   13669 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0906 23:38:03.567342   13669 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0906 23:38:03.567428   13669 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0906 23:38:04.423861   13669 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0906 23:38:04.424166   13669 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/download-only-435150/config.json ...
	I0906 23:38:04.424194   13669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/download-only-435150/config.json: {Name:mke8c88f265474867c8698d1ec9dfdd1220b1cea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:38:04.424375   13669 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0906 23:38:04.424569   13669 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-435150"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
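Note: the non-zero exit above is what the test expects at this stage. The profile was created with --download-only, so no control plane node exists yet and "minikube logs" has nothing to collect, as the stdout message shows. A minimal sketch of the same sequence, using the profile name and flags from the run above (the trailing echo of the exit code is an illustrative addition, not part of the test):

	# download-only start populates caches but creates no VM or node
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-435150 \
	  --force --alsologtostderr --kubernetes-version=v1.16.0 \
	  --container-runtime=crio --driver=kvm2
	# logs therefore fails: "The control plane node \"\" does not exist." (exit status 85)
	out/minikube-linux-amd64 logs -p download-only-435150; echo "exit: $?"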

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/json-events (16.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-435150 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-435150 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.211772437s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (16.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-435150
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-435150: exit status 85 (56.27746ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-435150 | jenkins | v1.31.2 | 06 Sep 23 23:37 UTC |          |
	|         | -p download-only-435150        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-435150 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC |          |
	|         | -p download-only-435150        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 23:38:26
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 23:38:26.158437   13821 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:38:26.158572   13821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:38:26.158581   13821 out.go:309] Setting ErrFile to fd 2...
	I0906 23:38:26.158587   13821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:38:26.158801   13821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	W0906 23:38:26.158907   13821 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17174-6470/.minikube/config/config.json: open /home/jenkins/minikube-integration/17174-6470/.minikube/config/config.json: no such file or directory
	I0906 23:38:26.159305   13821 out.go:303] Setting JSON to true
	I0906 23:38:26.160030   13821 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1250,"bootTime":1694042256,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:38:26.160086   13821 start.go:138] virtualization: kvm guest
	I0906 23:38:26.162262   13821 out.go:97] [download-only-435150] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0906 23:38:26.163905   13821 out.go:169] MINIKUBE_LOCATION=17174
	I0906 23:38:26.162411   13821 notify.go:220] Checking for updates...
	I0906 23:38:26.166645   13821 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:38:26.168012   13821 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0906 23:38:26.169304   13821 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0906 23:38:26.170933   13821 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0906 23:38:26.173981   13821 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 23:38:26.174343   13821 config.go:182] Loaded profile config "download-only-435150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0906 23:38:26.174379   13821 start.go:810] api.Load failed for download-only-435150: filestore "download-only-435150": Docker machine "download-only-435150" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0906 23:38:26.174447   13821 driver.go:373] Setting default libvirt URI to qemu:///system
	W0906 23:38:26.174472   13821 start.go:810] api.Load failed for download-only-435150: filestore "download-only-435150": Docker machine "download-only-435150" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0906 23:38:26.205009   13821 out.go:97] Using the kvm2 driver based on existing profile
	I0906 23:38:26.205045   13821 start.go:298] selected driver: kvm2
	I0906 23:38:26.205049   13821 start.go:902] validating driver "kvm2" against &{Name:download-only-435150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-435150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:38:26.205427   13821 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:38:26.205494   13821 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6470/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 23:38:26.219418   13821 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0906 23:38:26.220093   13821 cni.go:84] Creating CNI manager for ""
	I0906 23:38:26.220105   13821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 23:38:26.220117   13821 start_flags.go:321] config:
	{Name:download-only-435150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-435150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:38:26.220258   13821 iso.go:125] acquiring lock: {Name:mkaa5ff42ec8226894cd395db53648415ea38dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:38:26.221883   13821 out.go:97] Starting control plane node download-only-435150 in cluster download-only-435150
	I0906 23:38:26.221895   13821 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 23:38:26.728528   13821 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0906 23:38:26.728597   13821 cache.go:57] Caching tarball of preloaded images
	I0906 23:38:26.728780   13821 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 23:38:26.730751   13821 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0906 23:38:26.730764   13821 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 ...
	I0906 23:38:26.845689   13821 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:7b00bd3467481f38e4a66499519b2cca -> /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0906 23:38:40.342232   13821 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 ...
	I0906 23:38:40.342322   13821 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17174-6470/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-435150"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-435150
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-605415 --alsologtostderr --binary-mirror http://127.0.0.1:45217 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-605415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-605415
--- PASS: TestBinaryMirror (0.55s)
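Note: --binary-mirror redirects the kubectl/kubeadm/kubelet binary downloads to an alternate HTTP endpoint; the 127.0.0.1:45217 address here is specific to this run (it is presumably served by the test harness). A minimal sketch of the invocation, assuming a mirror is already listening on that address:

	# download-only start fetching binaries from a local mirror instead of dl.k8s.io
	out/minikube-linux-amd64 start --download-only -p binary-mirror-605415 \
	  --alsologtostderr --binary-mirror http://127.0.0.1:45217 \
	  --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p binary-mirror-605415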

                                                
                                    
x
+
TestOffline (103.12s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-315234 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-315234 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m42.117080424s)
helpers_test.go:175: Cleaning up "offline-crio-315234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-315234
--- PASS: TestOffline (103.12s)

                                                
                                    
x
+
TestAddons/Setup (154.29s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-503456 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-503456 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m34.288474154s)
--- PASS: TestAddons/Setup (154.29s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17.66s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 26.618168ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-wtw27" [eeb1866f-e448-437f-b333-3d93f770b680] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.024908515s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-smcjh" [54d212ef-6349-4b44-99f7-bc51cb724809] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.034974711s
addons_test.go:316: (dbg) Run:  kubectl --context addons-503456 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-503456 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-503456 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.08881524s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-503456 ip
2023/09/06 23:41:33 [DEBUG] GET http://192.168.39.156:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-503456 addons disable registry --alsologtostderr -v=1
addons_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p addons-503456 addons disable registry --alsologtostderr -v=1: (1.289603794s)
--- PASS: TestAddons/parallel/Registry (17.66s)
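Condensed, the registry check above amounts to the following command sequence (a minimal sketch; the profile name, image, and service URL are taken verbatim from the run above):

	# probe the registry addon's in-cluster service from a one-shot busybox pod
	kubectl --context addons-503456 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# confirm the node IP that fronts the registry proxy, then disable the addon
	out/minikube-linux-amd64 -p addons-503456 ip
	out/minikube-linux-amd64 -p addons-503456 addons disable registry --alsologtostderr -v=1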

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.96s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-r4dlq" [98883238-5bfe-41c8-a3ed-a4c5a9c0c40b] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.02148306s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-503456
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-503456: (6.940724881s)
--- PASS: TestAddons/parallel/InspektorGadget (11.96s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.74s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 27.878546ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-4v28l" [628112ae-73d6-4779-a757-b6197698e1d5] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01939772s
addons_test.go:391: (dbg) Run:  kubectl --context addons-503456 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-503456 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-503456 addons disable metrics-server --alsologtostderr -v=1: (1.605317933s)
--- PASS: TestAddons/parallel/MetricsServer (6.74s)
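The functional check here is simply that kubectl top returns data once the metrics-server pod is healthy; a minimal sketch using the commands from the run above:

	# metrics must be available before this returns per-pod usage
	kubectl --context addons-503456 top pods -n kube-system
	out/minikube-linux-amd64 -p addons-503456 addons disable metrics-server --alsologtostderr -v=1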

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (14.82s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 10.702012ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-7ns7n" [1fcb101f-c09b-4237-be12-23fbd6b68cda] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.022670063s
addons_test.go:449: (dbg) Run:  kubectl --context addons-503456 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-503456 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.17027441s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-503456 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.82s)
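The equivalent manual check, taken from the commands above (a sketch; the helm client image is whatever this addon test pins, here docker.io/alpine/helm:2.16.3):

	# ask the in-cluster tiller for its version via a one-shot helm client pod
	kubectl --context addons-503456 run --rm helm-test --restart=Never \
	  --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
	out/minikube-linux-amd64 -p addons-503456 addons disable helm-tiller --alsologtostderr -v=1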

                                                
                                    
x
+
TestAddons/parallel/CSI (80.35s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 6.367556ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-503456 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:540: (dbg) Done: kubectl --context addons-503456 create -f testdata/csi-hostpath-driver/pvc.yaml: (1.226861073s)
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-503456 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ccd50a9e-cdd5-41a2-b676-7b1eb223128b] Pending
helpers_test.go:344: "task-pv-pod" [ccd50a9e-cdd5-41a2-b676-7b1eb223128b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ccd50a9e-cdd5-41a2-b676-7b1eb223128b] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.020799s
addons_test.go:560: (dbg) Run:  kubectl --context addons-503456 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-503456 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-503456 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-503456 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-503456 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-503456 delete pod task-pv-pod: (1.10567813s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-503456 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-503456 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-503456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-503456 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [10daf5d3-54e1-442a-9361-2fcdf67ab0d2] Pending
helpers_test.go:344: "task-pv-pod-restore" [10daf5d3-54e1-442a-9361-2fcdf67ab0d2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [10daf5d3-54e1-442a-9361-2fcdf67ab0d2] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.021704988s
addons_test.go:602: (dbg) Run:  kubectl --context addons-503456 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-503456 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-503456 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-503456 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-503456 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.815062316s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-503456 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (80.35s)
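The CSI scenario above is a provision → snapshot → restore round trip driven entirely by kubectl; a condensed sketch of the same sequence (the manifest paths are the testdata files referenced in the log, so they are only available from the minikube test tree):

	# provision a PVC and a pod that mounts it, then snapshot the volume
	kubectl --context addons-503456 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-503456 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-503456 create -f testdata/csi-hostpath-driver/snapshot.yaml
	# drop the originals and restore a new PVC/pod from the snapshot
	kubectl --context addons-503456 delete pod task-pv-pod
	kubectl --context addons-503456 delete pvc hpvc
	kubectl --context addons-503456 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-503456 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	# clean up and disable the addons
	kubectl --context addons-503456 delete pod task-pv-pod-restore
	kubectl --context addons-503456 delete pvc hpvc-restore
	kubectl --context addons-503456 delete volumesnapshot new-snapshot-demo
	out/minikube-linux-amd64 -p addons-503456 addons disable csi-hostpath-driver --alsologtostderr -v=1
	out/minikube-linux-amd64 -p addons-503456 addons disable volumesnapshots --alsologtostderr -v=1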

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-503456 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-503456 --alsologtostderr -v=1: (1.581416564s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-8bfx2" [74712abb-a7f5-4f24-9c48-90a8918e78bb] Pending
helpers_test.go:344: "headlamp-699c48fb74-8bfx2" [74712abb-a7f5-4f24-9c48-90a8918e78bb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-8bfx2" [74712abb-a7f5-4f24-9c48-90a8918e78bb] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-8bfx2" [74712abb-a7f5-4f24-9c48-90a8918e78bb] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.03701689s
--- PASS: TestAddons/parallel/Headlamp (16.62s)
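Manually this is just enabling the addon and waiting for its pod to become Ready; a sketch (the kubectl wait form is an assumption standing in for the test's own polling helper):

	out/minikube-linux-amd64 addons enable headlamp -p addons-503456 --alsologtostderr -v=1
	# wait for the headlamp deployment's pod, matching the label the test polls on
	kubectl --context addons-503456 -n headlamp wait --for=condition=ready pod \
	  -l app.kubernetes.io/name=headlamp --timeout=8m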

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.14s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dcc56475c-pcb9t" [f6b34caf-3b3a-43f8-9aa9-fff677a9e709] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.020262945s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-503456
addons_test.go:836: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-503456: (1.084944076s)
--- PASS: TestAddons/parallel/CloudSpanner (6.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-503456 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-503456 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)
TestCertOptions (51.84s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-818054 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-818054 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (50.384025987s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-818054 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-818054 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-818054 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-818054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-818054
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-818054: (1.003023779s)
--- PASS: TestCertOptions (51.84s)
TestCertExpiration (277.74s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-386196 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-386196 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (54.072059572s)
E0907 00:39:02.117205   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-386196 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-386196 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (42.614872264s)
helpers_test.go:175: Cleaning up "cert-expiration-386196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-386196
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-386196: (1.054623447s)
--- PASS: TestCertExpiration (277.74s)
TestForceSystemdFlag (83.47s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-949073 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-949073 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.186123047s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-949073 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-949073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-949073
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-949073: (1.06380216s)
--- PASS: TestForceSystemdFlag (83.47s)
TestForceSystemdEnv (79.42s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-347596 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-347596 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.408768823s)
helpers_test.go:175: Cleaning up "force-systemd-env-347596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-347596
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-347596: (1.00669505s)
--- PASS: TestForceSystemdEnv (79.42s)
TestKVMDriverInstallOrUpdate (3.98s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.98s)
TestErrorSpam/setup (48.4s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-180546 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-180546 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-180546 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-180546 --driver=kvm2  --container-runtime=crio: (48.401029898s)
--- PASS: TestErrorSpam/setup (48.40s)
TestErrorSpam/start (0.32s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)
TestErrorSpam/status (0.74s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 status
--- PASS: TestErrorSpam/status (0.74s)
TestErrorSpam/pause (1.45s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 pause
--- PASS: TestErrorSpam/pause (1.45s)
TestErrorSpam/unpause (1.56s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 unpause
--- PASS: TestErrorSpam/unpause (1.56s)
TestErrorSpam/stop (2.2s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 stop: (2.074875016s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180546 --log_dir /tmp/nospam-180546 stop
--- PASS: TestErrorSpam/stop (2.20s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17174-6470/.minikube/files/etc/test/nested/copy/13657/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (99.18s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-000295 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-000295 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m39.180351063s)
--- PASS: TestFunctional/serial/StartWithProxy (99.18s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (53.32s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-000295 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-000295 --alsologtostderr -v=8: (53.322762427s)
functional_test.go:659: soft start took 53.323394282s for "functional-000295" cluster.
--- PASS: TestFunctional/serial/SoftStart (53.32s)
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)
TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-000295 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)
TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)
TestFunctional/serial/CacheCmd/cache/add_local (2.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-000295 /tmp/TestFunctionalserialCacheCmdcacheadd_local735107111/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 cache add minikube-local-cache-test:functional-000295
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-000295 cache add minikube-local-cache-test:functional-000295: (1.913776742s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 cache delete minikube-local-cache-test:functional-000295
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-000295
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.23s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)
TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000295 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (201.188656ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
TestFunctional/serial/CacheCmd/cache/delete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)
TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 kubectl -- --context functional-000295 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-000295 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)
TestFunctional/serial/ExtraConfig (35.44s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-000295 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-000295 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.434781283s)
functional_test.go:757: restart took 35.434886651s for "functional-000295" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.44s)
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-000295 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
TestFunctional/serial/LogsCmd (1.34s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 logs
E0906 23:51:17.593283   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0906 23:51:17.599188   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0906 23:51:17.609367   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0906 23:51:17.629956   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0906 23:51:17.670722   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0906 23:51:17.751223   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-000295 logs: (1.33695928s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)
TestFunctional/serial/LogsFileCmd (1.35s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 logs --file /tmp/TestFunctionalserialLogsFileCmd2759978932/001/logs.txt
E0906 23:51:17.911475   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0906 23:51:18.231970   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0906 23:51:18.872317   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-000295 logs --file /tmp/TestFunctionalserialLogsFileCmd2759978932/001/logs.txt: (1.34749302s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)
TestFunctional/serial/InvalidService (4.93s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-000295 apply -f testdata/invalidsvc.yaml
E0906 23:51:20.153293   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-000295
E0906 23:51:22.713495   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-000295: exit status 115 (274.547131ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.159:31340 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-000295 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-000295 delete -f testdata/invalidsvc.yaml: (1.319194518s)
--- PASS: TestFunctional/serial/InvalidService (4.93s)
TestFunctional/parallel/ConfigCmd (0.28s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000295 config get cpus: exit status 14 (47.559938ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000295 config get cpus: exit status 14 (39.169181ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.28s)
TestFunctional/parallel/DashboardCmd (28.6s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-000295 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-000295 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21541: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.60s)
TestFunctional/parallel/DryRun (0.29s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-000295 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-000295 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.959846ms)
-- stdout --
	* [functional-000295] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0906 23:51:49.686662   21131 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:51:49.686771   21131 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:51:49.686795   21131 out.go:309] Setting ErrFile to fd 2...
	I0906 23:51:49.686802   21131 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:51:49.687030   21131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0906 23:51:49.687568   21131 out.go:303] Setting JSON to false
	I0906 23:51:49.688543   21131 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2054,"bootTime":1694042256,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:51:49.688598   21131 start.go:138] virtualization: kvm guest
	I0906 23:51:49.691021   21131 out.go:177] * [functional-000295] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0906 23:51:49.693658   21131 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 23:51:49.693662   21131 notify.go:220] Checking for updates...
	I0906 23:51:49.695388   21131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:51:49.697279   21131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0906 23:51:49.699326   21131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0906 23:51:49.703116   21131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 23:51:49.704742   21131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 23:51:49.706761   21131 config.go:182] Loaded profile config "functional-000295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 23:51:49.707317   21131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:51:49.707382   21131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:51:49.727904   21131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0906 23:51:49.728274   21131 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:51:49.728891   21131 main.go:141] libmachine: Using API Version  1
	I0906 23:51:49.728916   21131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:51:49.729284   21131 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:51:49.729521   21131 main.go:141] libmachine: (functional-000295) Calling .DriverName
	I0906 23:51:49.729829   21131 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 23:51:49.730185   21131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:51:49.730228   21131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:51:49.744864   21131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0906 23:51:49.745229   21131 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:51:49.745685   21131 main.go:141] libmachine: Using API Version  1
	I0906 23:51:49.745713   21131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:51:49.746065   21131 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:51:49.746273   21131 main.go:141] libmachine: (functional-000295) Calling .DriverName
	I0906 23:51:49.780834   21131 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 23:51:49.782217   21131 start.go:298] selected driver: kvm2
	I0906 23:51:49.782233   21131 start.go:902] validating driver "kvm2" against &{Name:functional-000295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:functional-000295 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:51:49.782358   21131 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 23:51:49.784810   21131 out.go:177] 
	W0906 23:51:49.786129   21131 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 23:51:49.787447   21131 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-000295 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-000295 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-000295 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (137.82043ms)
-- stdout --
	* [functional-000295] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0906 23:51:39.440030   20311 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:51:39.440138   20311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:51:39.440146   20311 out.go:309] Setting ErrFile to fd 2...
	I0906 23:51:39.440150   20311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:51:39.440397   20311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0906 23:51:39.440898   20311 out.go:303] Setting JSON to false
	I0906 23:51:39.441718   20311 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2044,"bootTime":1694042256,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:51:39.441776   20311 start.go:138] virtualization: kvm guest
	I0906 23:51:39.444100   20311 out.go:177] * [functional-000295] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0906 23:51:39.445559   20311 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 23:51:39.445613   20311 notify.go:220] Checking for updates...
	I0906 23:51:39.447230   20311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:51:39.449083   20311 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0906 23:51:39.451521   20311 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0906 23:51:39.453294   20311 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 23:51:39.454903   20311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 23:51:39.456893   20311 config.go:182] Loaded profile config "functional-000295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 23:51:39.457258   20311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:51:39.457315   20311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:51:39.476600   20311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I0906 23:51:39.477094   20311 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:51:39.477692   20311 main.go:141] libmachine: Using API Version  1
	I0906 23:51:39.477714   20311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:51:39.478096   20311 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:51:39.478270   20311 main.go:141] libmachine: (functional-000295) Calling .DriverName
	I0906 23:51:39.478537   20311 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 23:51:39.478977   20311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 23:51:39.479036   20311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:51:39.493209   20311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42103
	I0906 23:51:39.493668   20311 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:51:39.494176   20311 main.go:141] libmachine: Using API Version  1
	I0906 23:51:39.494197   20311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:51:39.494606   20311 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:51:39.494919   20311 main.go:141] libmachine: (functional-000295) Calling .DriverName
	I0906 23:51:39.527927   20311 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0906 23:51:39.529705   20311 start.go:298] selected driver: kvm2
	I0906 23:51:39.529720   20311 start.go:902] validating driver "kvm2" against &{Name:functional-000295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:functional-000295 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:51:39.529839   20311 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 23:51:39.532267   20311 out.go:177] 
	W0906 23:51:39.533842   20311 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 23:51:39.535407   20311 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
TestFunctional/parallel/StatusCmd (1.29s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.29s)
TestFunctional/parallel/ServiceCmdConnect (13.76s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-000295 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-000295 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-t5c8v" [cd786ef3-0032-4782-aa43-d289c120eaf3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-t5c8v" [cd786ef3-0032-4782-aa43-d289c120eaf3] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.017437058s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.159:32355
functional_test.go:1674: http://192.168.39.159:32355: success! body:
Hostname: hello-node-connect-55497b8b78-t5c8v

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.159:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.159:32355
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.76s)
TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)
TestFunctional/parallel/PersistentVolumeClaim (49.48s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [754eb0f2-709b-47bb-a769-2d4f8ee31301] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.023062311s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-000295 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-000295 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-000295 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-000295 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-000295 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [247a774d-e523-40d3-aada-1d44d6387738] Pending
helpers_test.go:344: "sp-pod" [247a774d-e523-40d3-aada-1d44d6387738] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [247a774d-e523-40d3-aada-1d44d6387738] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.036062963s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-000295 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-000295 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-000295 delete -f testdata/storage-provisioner/pod.yaml: (1.629117173s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-000295 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1dafb0b4-8e3c-4e56-8b8c-099924274242] Pending
helpers_test.go:344: "sp-pod" [1dafb0b4-8e3c-4e56-8b8c-099924274242] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1dafb0b4-8e3c-4e56-8b8c-099924274242] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.043506854s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-000295 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.48s)
TestFunctional/parallel/SSHCmd (0.4s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.85s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh -n functional-000295 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 cp functional-000295:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd971669504/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh -n functional-000295 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.85s)

                                                
                                    
TestFunctional/parallel/MySQL (33.61s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-000295 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-s5qqp" [37305593-a99d-4de5-82c5-f87d0725884b] Pending
helpers_test.go:344: "mysql-859648c796-s5qqp" [37305593-a99d-4de5-82c5-f87d0725884b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-s5qqp" [37305593-a99d-4de5-82c5-f87d0725884b] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 32.053514878s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-000295 exec mysql-859648c796-s5qqp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-000295 exec mysql-859648c796-s5qqp -- mysql -ppassword -e "show databases;": exit status 1 (187.642519ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-000295 exec mysql-859648c796-s5qqp -- mysql -ppassword -e "show databases;"
2023/09/06 23:52:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (33.61s)
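Note: the first exec above fails with ERROR 2002 because mysqld inside the pod had not finished initializing its socket yet; the test simply re-runs the query. A hand-run equivalent is a small retry loop (pod name copied from this run; a fresh deployment would get a different suffix):
until kubectl --context functional-000295 exec mysql-859648c796-s5qqp -- mysql -ppassword -e "show databases;"; do
  sleep 2   # wait for mysqld to start accepting connections on its socket
done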

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13657/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "sudo cat /etc/test/nested/copy/13657/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13657.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "sudo cat /etc/ssl/certs/13657.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13657.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "sudo cat /usr/share/ca-certificates/13657.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/136572.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "sudo cat /etc/ssl/certs/136572.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/136572.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "sudo cat /usr/share/ca-certificates/136572.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.29s)
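Note: the hash-named files checked above (51391683.0, 3ec20f2e.0) are how CA certificates are looked up in /etc/ssl/certs; if the naming follows the usual OpenSSL subject-hash scheme, the hash can be recomputed from the synced .pem. A spot check, assuming openssl is available in the guest (not something this test runs):
out/minikube-linux-amd64 -p functional-000295 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/13657.pem"   # should print 51391683 if the pairing above holds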

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-000295 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
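Note: the go-template above just dumps the label keys of the first node; a quicker manual equivalent (not what the test runs) is:
kubectl --context functional-000295 get nodes --show-labels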

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000295 ssh "sudo systemctl is-active docker": exit status 1 (206.799383ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000295 ssh "sudo systemctl is-active containerd": exit status 1 (206.319326ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
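Note: systemctl is-active exits non-zero for units that are not active, so the non-zero exits together with the "inactive" output above are the expected result on a crio profile. The complementary check (not part of the test) would be:
out/minikube-linux-amd64 -p functional-000295 ssh "sudo systemctl is-active crio"   # should print "active" and exit 0, assuming the runtime unit is named crio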

                                                
                                    
TestFunctional/parallel/License (0.59s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (1.32s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-000295 version -o=json --components: (1.324752522s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-000295 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-000295
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-000295
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-000295 image ls --format short --alsologtostderr:
I0906 23:51:56.197776   21764 out.go:296] Setting OutFile to fd 1 ...
I0906 23:51:56.197907   21764 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:51:56.197920   21764 out.go:309] Setting ErrFile to fd 2...
I0906 23:51:56.197927   21764 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:51:56.198225   21764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
I0906 23:51:56.199025   21764 config.go:182] Loaded profile config "functional-000295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 23:51:56.199169   21764 config.go:182] Loaded profile config "functional-000295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 23:51:56.199688   21764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 23:51:56.199759   21764 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:51:56.214961   21764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37847
I0906 23:51:56.215398   21764 main.go:141] libmachine: () Calling .GetVersion
I0906 23:51:56.215974   21764 main.go:141] libmachine: Using API Version  1
I0906 23:51:56.216003   21764 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:51:56.216384   21764 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:51:56.216550   21764 main.go:141] libmachine: (functional-000295) Calling .GetState
I0906 23:51:56.218332   21764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 23:51:56.218379   21764 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:51:56.232963   21764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
I0906 23:51:56.233396   21764 main.go:141] libmachine: () Calling .GetVersion
I0906 23:51:56.233958   21764 main.go:141] libmachine: Using API Version  1
I0906 23:51:56.233977   21764 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:51:56.234324   21764 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:51:56.234516   21764 main.go:141] libmachine: (functional-000295) Calling .DriverName
I0906 23:51:56.234773   21764 ssh_runner.go:195] Run: systemctl --version
I0906 23:51:56.234828   21764 main.go:141] libmachine: (functional-000295) Calling .GetSSHHostname
I0906 23:51:56.237715   21764 main.go:141] libmachine: (functional-000295) DBG | domain functional-000295 has defined MAC address 52:54:00:73:39:c0 in network mk-functional-000295
I0906 23:51:56.238128   21764 main.go:141] libmachine: (functional-000295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:39:c0", ip: ""} in network mk-functional-000295: {Iface:virbr1 ExpiryTime:2023-09-07 00:48:17 +0000 UTC Type:0 Mac:52:54:00:73:39:c0 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:functional-000295 Clientid:01:52:54:00:73:39:c0}
I0906 23:51:56.238165   21764 main.go:141] libmachine: (functional-000295) DBG | domain functional-000295 has defined IP address 192.168.39.159 and MAC address 52:54:00:73:39:c0 in network mk-functional-000295
I0906 23:51:56.238326   21764 main.go:141] libmachine: (functional-000295) Calling .GetSSHPort
I0906 23:51:56.238494   21764 main.go:141] libmachine: (functional-000295) Calling .GetSSHKeyPath
I0906 23:51:56.238657   21764 main.go:141] libmachine: (functional-000295) Calling .GetSSHUsername
I0906 23:51:56.238825   21764 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/functional-000295/id_rsa Username:docker}
I0906 23:51:56.378688   21764 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:51:56.484531   21764 main.go:141] libmachine: Making call to close driver server
I0906 23:51:56.484595   21764 main.go:141] libmachine: (functional-000295) Calling .Close
I0906 23:51:56.484927   21764 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:51:56.484946   21764 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:51:56.484969   21764 main.go:141] libmachine: Making call to close driver server
I0906 23:51:56.484979   21764 main.go:141] libmachine: (functional-000295) Calling .Close
I0906 23:51:56.485228   21764 main.go:141] libmachine: (functional-000295) DBG | Closing plugin on server side
I0906 23:51:56.485248   21764 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:51:56.485258   21764 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)
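Note: as the stderr trace shows, the image listing in these subtests is read from CRI-O on the node via sudo crictl; the same data can be inspected directly for comparison:
out/minikube-linux-amd64 -p functional-000295 ssh "sudo crictl images --output json"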

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-000295 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.1            | 5c801295c21d0 | 127MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | latest             | eea7b3dcba7ee | 191MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.28.1            | 6cdbabde3874e | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.1            | b462ce0c8b1ff | 61.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 92034fe9a41f4 | 601MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-000295  | 8e3bf24c340c0 | 3.35kB |
| localhost/my-image                      | functional-000295  | 967f479cea930 | 1.47MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| gcr.io/google-containers/addon-resizer  | functional-000295  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-controller-manager | v1.28.1            | 821b3dfea27be | 123MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-000295 image ls --format table --alsologtostderr:
I0906 23:52:09.269487   21934 out.go:296] Setting OutFile to fd 1 ...
I0906 23:52:09.269678   21934 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:52:09.269688   21934 out.go:309] Setting ErrFile to fd 2...
I0906 23:52:09.269695   21934 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:52:09.269894   21934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
I0906 23:52:09.270423   21934 config.go:182] Loaded profile config "functional-000295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 23:52:09.270536   21934 config.go:182] Loaded profile config "functional-000295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 23:52:09.270894   21934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 23:52:09.270949   21934 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:52:09.284671   21934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43619
I0906 23:52:09.285071   21934 main.go:141] libmachine: () Calling .GetVersion
I0906 23:52:09.285607   21934 main.go:141] libmachine: Using API Version  1
I0906 23:52:09.285630   21934 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:52:09.285939   21934 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:52:09.286085   21934 main.go:141] libmachine: (functional-000295) Calling .GetState
I0906 23:52:09.287762   21934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 23:52:09.287795   21934 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:52:09.301010   21934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
I0906 23:52:09.301352   21934 main.go:141] libmachine: () Calling .GetVersion
I0906 23:52:09.301735   21934 main.go:141] libmachine: Using API Version  1
I0906 23:52:09.301752   21934 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:52:09.302084   21934 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:52:09.302251   21934 main.go:141] libmachine: (functional-000295) Calling .DriverName
I0906 23:52:09.302457   21934 ssh_runner.go:195] Run: systemctl --version
I0906 23:52:09.302477   21934 main.go:141] libmachine: (functional-000295) Calling .GetSSHHostname
I0906 23:52:09.304873   21934 main.go:141] libmachine: (functional-000295) DBG | domain functional-000295 has defined MAC address 52:54:00:73:39:c0 in network mk-functional-000295
I0906 23:52:09.305276   21934 main.go:141] libmachine: (functional-000295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:39:c0", ip: ""} in network mk-functional-000295: {Iface:virbr1 ExpiryTime:2023-09-07 00:48:17 +0000 UTC Type:0 Mac:52:54:00:73:39:c0 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:functional-000295 Clientid:01:52:54:00:73:39:c0}
I0906 23:52:09.305305   21934 main.go:141] libmachine: (functional-000295) DBG | domain functional-000295 has defined IP address 192.168.39.159 and MAC address 52:54:00:73:39:c0 in network mk-functional-000295
I0906 23:52:09.305372   21934 main.go:141] libmachine: (functional-000295) Calling .GetSSHPort
I0906 23:52:09.305564   21934 main.go:141] libmachine: (functional-000295) Calling .GetSSHKeyPath
I0906 23:52:09.305714   21934 main.go:141] libmachine: (functional-000295) Calling .GetSSHUsername
I0906 23:52:09.305849   21934 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/functional-000295/id_rsa Username:docker}
I0906 23:52:09.389282   21934 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:52:09.423356   21934 main.go:141] libmachine: Making call to close driver server
I0906 23:52:09.423374   21934 main.go:141] libmachine: (functional-000295) Calling .Close
I0906 23:52:09.423671   21934 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:52:09.423723   21934 main.go:141] libmachine: (functional-000295) DBG | Closing plugin on server side
I0906 23:52:09.423746   21934 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:52:09.423757   21934 main.go:141] libmachine: Making call to close driver server
I0906 23:52:09.423765   21934 main.go:141] libmachine: (functional-000295) Calling .Close
I0906 23:52:09.423966   21934 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:52:09.423988   21934 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:52:09.424001   21934 main.go:141] libmachine: (functional-000295) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-000295 image ls --format json --alsologtostderr:
[{"id":"8e3bf24c340c0cb69e1a638e4d175ccccbf81fb9cc58c91d6864280afc69bcc0","repoDigests":["localhost/minikube-local-cache-test@sha256:c7827cf5a003366cca179871c741559c4e3b3e3d2a1b80e1b05596dc3a7e161d"],"repoTags":["localhost/minikube-local-cache-test:functional-000295"],"size":"3345"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa
8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"c8142231aaded728eb3012eacd0193b15e59830773f268e8480989a0eb93290a","repoDigests":["docker.io/library/c30a410af7d186c56f307267f1708c71e5733011bc492f8da4bb006faec818df-tmp@sha256:ea687cc04f2942aa890de468e46776d8a2
d7c504ae6555283c0ac730086863b4"],"repoTags":[],"size":"1466018"},{"id":"eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24","repoDigests":["docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c","docker.io/library/nginx@sha256:48a84a0728cab8ac558f48796f901f6d31d287101bc8b317683678125e0d2d35"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820092"},{"id":"967f479cea930dc46ceaaa4a9aeed1626784e92f9f5c8b7bb190cacb83812a5c","repoDigests":["localhost/my-image@sha256:b64240af76d0f589fed503ccba4c130235c9a920973e73cc902d21498813e24e"],"repoTags":["localhost/my-image:functional-000295"],"size":"1468599"},{"id":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","repoDigests":["registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774","registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"si
ze":"126972880"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83","docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601277093"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0
f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830","registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"123163446"},{"id":"ffd4cfbbe753e62419e1
29ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-000295"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"6cdbabde3
874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","repoDigests":["registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3","registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"74680215"},{"id":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4","registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"61477686"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-000295 image ls --format json --alsologtostderr:
I0906 23:52:09.068149   21910 out.go:296] Setting OutFile to fd 1 ...
I0906 23:52:09.068274   21910 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:52:09.068282   21910 out.go:309] Setting ErrFile to fd 2...
I0906 23:52:09.068286   21910 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:52:09.068467   21910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
I0906 23:52:09.068989   21910 config.go:182] Loaded profile config "functional-000295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 23:52:09.069077   21910 config.go:182] Loaded profile config "functional-000295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 23:52:09.069389   21910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 23:52:09.069434   21910 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:52:09.084645   21910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
I0906 23:52:09.085095   21910 main.go:141] libmachine: () Calling .GetVersion
I0906 23:52:09.085784   21910 main.go:141] libmachine: Using API Version  1
I0906 23:52:09.085808   21910 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:52:09.086135   21910 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:52:09.086317   21910 main.go:141] libmachine: (functional-000295) Calling .GetState
I0906 23:52:09.087942   21910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 23:52:09.087981   21910 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:52:09.102853   21910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33303
I0906 23:52:09.103323   21910 main.go:141] libmachine: () Calling .GetVersion
I0906 23:52:09.103775   21910 main.go:141] libmachine: Using API Version  1
I0906 23:52:09.103797   21910 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:52:09.104123   21910 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:52:09.104283   21910 main.go:141] libmachine: (functional-000295) Calling .DriverName
I0906 23:52:09.104474   21910 ssh_runner.go:195] Run: systemctl --version
I0906 23:52:09.104498   21910 main.go:141] libmachine: (functional-000295) Calling .GetSSHHostname
I0906 23:52:09.107373   21910 main.go:141] libmachine: (functional-000295) DBG | domain functional-000295 has defined MAC address 52:54:00:73:39:c0 in network mk-functional-000295
I0906 23:52:09.107787   21910 main.go:141] libmachine: (functional-000295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:39:c0", ip: ""} in network mk-functional-000295: {Iface:virbr1 ExpiryTime:2023-09-07 00:48:17 +0000 UTC Type:0 Mac:52:54:00:73:39:c0 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:functional-000295 Clientid:01:52:54:00:73:39:c0}
I0906 23:52:09.107819   21910 main.go:141] libmachine: (functional-000295) DBG | domain functional-000295 has defined IP address 192.168.39.159 and MAC address 52:54:00:73:39:c0 in network mk-functional-000295
I0906 23:52:09.107966   21910 main.go:141] libmachine: (functional-000295) Calling .GetSSHPort
I0906 23:52:09.108129   21910 main.go:141] libmachine: (functional-000295) Calling .GetSSHKeyPath
I0906 23:52:09.108321   21910 main.go:141] libmachine: (functional-000295) Calling .GetSSHUsername
I0906 23:52:09.108422   21910 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/functional-000295/id_rsa Username:docker}
I0906 23:52:09.193452   21910 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:52:09.224912   21910 main.go:141] libmachine: Making call to close driver server
I0906 23:52:09.224927   21910 main.go:141] libmachine: (functional-000295) Calling .Close
I0906 23:52:09.225208   21910 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:52:09.225239   21910 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:52:09.225238   21910 main.go:141] libmachine: (functional-000295) DBG | Closing plugin on server side
I0906 23:52:09.225258   21910 main.go:141] libmachine: Making call to close driver server
I0906 23:52:09.225268   21910 main.go:141] libmachine: (functional-000295) Calling .Close
I0906 23:52:09.225494   21910 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:52:09.225510   21910 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-000295 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 8e3bf24c340c0cb69e1a638e4d175ccccbf81fb9cc58c91d6864280afc69bcc0
repoDigests:
- localhost/minikube-local-cache-test@sha256:c7827cf5a003366cca179871c741559c4e3b3e3d2a1b80e1b05596dc3a7e161d
repoTags:
- localhost/minikube-local-cache-test:functional-000295
size: "3345"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830
- registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "123163446"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24
repoDigests:
- docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c
- docker.io/library/nginx@sha256:48a84a0728cab8ac558f48796f901f6d31d287101bc8b317683678125e0d2d35
repoTags:
- docker.io/library/nginx:latest
size: "190820092"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-000295
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3
- registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "74680215"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4
- registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "61477686"
- id: 5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774
- registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "126972880"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-000295 image ls --format yaml --alsologtostderr:
I0906 23:51:56.528968   21787 out.go:296] Setting OutFile to fd 1 ...
I0906 23:51:56.529091   21787 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:51:56.529099   21787 out.go:309] Setting ErrFile to fd 2...
I0906 23:51:56.529103   21787 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:51:56.529299   21787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
I0906 23:51:56.529834   21787 config.go:182] Loaded profile config "functional-000295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 23:51:56.529920   21787 config.go:182] Loaded profile config "functional-000295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 23:51:56.530222   21787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 23:51:56.530264   21787 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:51:56.544327   21787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32955
I0906 23:51:56.544771   21787 main.go:141] libmachine: () Calling .GetVersion
I0906 23:51:56.545347   21787 main.go:141] libmachine: Using API Version  1
I0906 23:51:56.545375   21787 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:51:56.545701   21787 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:51:56.545904   21787 main.go:141] libmachine: (functional-000295) Calling .GetState
I0906 23:51:56.547753   21787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 23:51:56.547798   21787 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:51:56.561798   21787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
I0906 23:51:56.562219   21787 main.go:141] libmachine: () Calling .GetVersion
I0906 23:51:56.562695   21787 main.go:141] libmachine: Using API Version  1
I0906 23:51:56.562717   21787 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:51:56.563101   21787 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:51:56.563290   21787 main.go:141] libmachine: (functional-000295) Calling .DriverName
I0906 23:51:56.563491   21787 ssh_runner.go:195] Run: systemctl --version
I0906 23:51:56.563514   21787 main.go:141] libmachine: (functional-000295) Calling .GetSSHHostname
I0906 23:51:56.565926   21787 main.go:141] libmachine: (functional-000295) DBG | domain functional-000295 has defined MAC address 52:54:00:73:39:c0 in network mk-functional-000295
I0906 23:51:56.566308   21787 main.go:141] libmachine: (functional-000295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:39:c0", ip: ""} in network mk-functional-000295: {Iface:virbr1 ExpiryTime:2023-09-07 00:48:17 +0000 UTC Type:0 Mac:52:54:00:73:39:c0 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:functional-000295 Clientid:01:52:54:00:73:39:c0}
I0906 23:51:56.566343   21787 main.go:141] libmachine: (functional-000295) DBG | domain functional-000295 has defined IP address 192.168.39.159 and MAC address 52:54:00:73:39:c0 in network mk-functional-000295
I0906 23:51:56.566426   21787 main.go:141] libmachine: (functional-000295) Calling .GetSSHPort
I0906 23:51:56.566615   21787 main.go:141] libmachine: (functional-000295) Calling .GetSSHKeyPath
I0906 23:51:56.566746   21787 main.go:141] libmachine: (functional-000295) Calling .GetSSHUsername
I0906 23:51:56.566865   21787 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/functional-000295/id_rsa Username:docker}
I0906 23:51:56.723154   21787 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:51:56.853577   21787 main.go:141] libmachine: Making call to close driver server
I0906 23:51:56.853595   21787 main.go:141] libmachine: (functional-000295) Calling .Close
I0906 23:51:56.853869   21787 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:51:56.853890   21787 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:51:56.853900   21787 main.go:141] libmachine: (functional-000295) DBG | Closing plugin on server side
I0906 23:51:56.853912   21787 main.go:141] libmachine: Making call to close driver server
I0906 23:51:56.853941   21787 main.go:141] libmachine: (functional-000295) Calling .Close
I0906 23:51:56.854172   21787 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:51:56.854195   21787 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (12.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000295 ssh pgrep buildkitd: exit status 1 (223.58034ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image build -t localhost/my-image:functional-000295 testdata/build --alsologtostderr
E0906 23:51:58.555545   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-000295 image build -t localhost/my-image:functional-000295 testdata/build --alsologtostderr: (11.744770102s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-000295 image build -t localhost/my-image:functional-000295 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c8142231aad
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-000295
--> 967f479cea9
Successfully tagged localhost/my-image:functional-000295
967f479cea930dc46ceaaa4a9aeed1626784e92f9f5c8b7bb190cacb83812a5c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-000295 image build -t localhost/my-image:functional-000295 testdata/build --alsologtostderr:
I0906 23:51:57.133683   21840 out.go:296] Setting OutFile to fd 1 ...
I0906 23:51:57.133848   21840 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:51:57.133858   21840 out.go:309] Setting ErrFile to fd 2...
I0906 23:51:57.133865   21840 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:51:57.134162   21840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
I0906 23:51:57.134935   21840 config.go:182] Loaded profile config "functional-000295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 23:51:57.135552   21840 config.go:182] Loaded profile config "functional-000295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 23:51:57.135920   21840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 23:51:57.135975   21840 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:51:57.150294   21840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
I0906 23:51:57.150710   21840 main.go:141] libmachine: () Calling .GetVersion
I0906 23:51:57.151300   21840 main.go:141] libmachine: Using API Version  1
I0906 23:51:57.151327   21840 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:51:57.151694   21840 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:51:57.151864   21840 main.go:141] libmachine: (functional-000295) Calling .GetState
I0906 23:51:57.153859   21840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 23:51:57.153908   21840 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:51:57.168418   21840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44125
I0906 23:51:57.168782   21840 main.go:141] libmachine: () Calling .GetVersion
I0906 23:51:57.169274   21840 main.go:141] libmachine: Using API Version  1
I0906 23:51:57.169296   21840 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:51:57.169608   21840 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:51:57.169828   21840 main.go:141] libmachine: (functional-000295) Calling .DriverName
I0906 23:51:57.170069   21840 ssh_runner.go:195] Run: systemctl --version
I0906 23:51:57.170101   21840 main.go:141] libmachine: (functional-000295) Calling .GetSSHHostname
I0906 23:51:57.172728   21840 main.go:141] libmachine: (functional-000295) DBG | domain functional-000295 has defined MAC address 52:54:00:73:39:c0 in network mk-functional-000295
I0906 23:51:57.173086   21840 main.go:141] libmachine: (functional-000295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:39:c0", ip: ""} in network mk-functional-000295: {Iface:virbr1 ExpiryTime:2023-09-07 00:48:17 +0000 UTC Type:0 Mac:52:54:00:73:39:c0 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:functional-000295 Clientid:01:52:54:00:73:39:c0}
I0906 23:51:57.173118   21840 main.go:141] libmachine: (functional-000295) DBG | domain functional-000295 has defined IP address 192.168.39.159 and MAC address 52:54:00:73:39:c0 in network mk-functional-000295
I0906 23:51:57.173398   21840 main.go:141] libmachine: (functional-000295) Calling .GetSSHPort
I0906 23:51:57.173577   21840 main.go:141] libmachine: (functional-000295) Calling .GetSSHKeyPath
I0906 23:51:57.173749   21840 main.go:141] libmachine: (functional-000295) Calling .GetSSHUsername
I0906 23:51:57.173875   21840 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/functional-000295/id_rsa Username:docker}
I0906 23:51:57.307479   21840 build_images.go:151] Building image from path: /tmp/build.4293354775.tar
I0906 23:51:57.307543   21840 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0906 23:51:57.332074   21840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4293354775.tar
I0906 23:51:57.346632   21840 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4293354775.tar: stat -c "%s %y" /var/lib/minikube/build/build.4293354775.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4293354775.tar': No such file or directory
I0906 23:51:57.346667   21840 ssh_runner.go:362] scp /tmp/build.4293354775.tar --> /var/lib/minikube/build/build.4293354775.tar (3072 bytes)
I0906 23:51:57.389790   21840 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4293354775
I0906 23:51:57.411887   21840 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4293354775 -xf /var/lib/minikube/build/build.4293354775.tar
I0906 23:51:57.435704   21840 crio.go:297] Building image: /var/lib/minikube/build/build.4293354775
I0906 23:51:57.435765   21840 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-000295 /var/lib/minikube/build/build.4293354775 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0906 23:52:08.804630   21840 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-000295 /var/lib/minikube/build/build.4293354775 --cgroup-manager=cgroupfs: (11.368846221s)
I0906 23:52:08.804681   21840 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4293354775
I0906 23:52:08.815471   21840 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4293354775.tar
I0906 23:52:08.825040   21840 build_images.go:207] Built localhost/my-image:functional-000295 from /tmp/build.4293354775.tar
I0906 23:52:08.825073   21840 build_images.go:123] succeeded building to: functional-000295
I0906 23:52:08.825077   21840 build_images.go:124] failed building to: 
I0906 23:52:08.825133   21840 main.go:141] libmachine: Making call to close driver server
I0906 23:52:08.825147   21840 main.go:141] libmachine: (functional-000295) Calling .Close
I0906 23:52:08.825469   21840 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:52:08.825482   21840 main.go:141] libmachine: (functional-000295) DBG | Closing plugin on server side
I0906 23:52:08.825492   21840 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:52:08.825508   21840 main.go:141] libmachine: Making call to close driver server
I0906 23:52:08.825521   21840 main.go:141] libmachine: (functional-000295) Calling .Close
I0906 23:52:08.825701   21840 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:52:08.825717   21840 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:52:08.825759   21840 main.go:141] libmachine: (functional-000295) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (12.17s)

TestFunctional/parallel/ImageCommands/Setup (2.02s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.004522379s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-000295
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.02s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

TestFunctional/parallel/ProfileCmd/profile_list (0.27s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "226.205779ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "42.063667ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "204.897227ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "41.694839ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-000295 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-000295 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-sgth2" [c32a124d-b441-49b1-bee5-4005f99d8d05] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-sgth2" [c32a124d-b441-49b1-bee5-4005f99d8d05] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.191909144s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.42s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image load --daemon gcr.io/google-containers/addon-resizer:functional-000295 --alsologtostderr
E0906 23:51:27.834381   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-000295 image load --daemon gcr.io/google-containers/addon-resizer:functional-000295 --alsologtostderr: (5.313123006s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.56s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image load --daemon gcr.io/google-containers/addon-resizer:functional-000295 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-000295 image load --daemon gcr.io/google-containers/addon-resizer:functional-000295 --alsologtostderr: (2.278035242s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.47s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.042252085s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-000295
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image load --daemon gcr.io/google-containers/addon-resizer:functional-000295 --alsologtostderr
E0906 23:51:38.074691   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-000295 image load --daemon gcr.io/google-containers/addon-resizer:functional-000295 --alsologtostderr: (6.937020849s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.26s)

TestFunctional/parallel/ServiceCmd/List (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/MountCmd/any-port (11.14s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-000295 /tmp/TestFunctionalparallelMountCmdany-port1203537463/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694044299539826844" to /tmp/TestFunctionalparallelMountCmdany-port1203537463/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694044299539826844" to /tmp/TestFunctionalparallelMountCmdany-port1203537463/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694044299539826844" to /tmp/TestFunctionalparallelMountCmdany-port1203537463/001/test-1694044299539826844
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000295 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.821886ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 23:51 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 23:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 23:51 test-1694044299539826844
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh cat /mount-9p/test-1694044299539826844
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-000295 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d4d8653c-6a9d-4e5f-a6f3-bd75be777382] Pending
helpers_test.go:344: "busybox-mount" [d4d8653c-6a9d-4e5f-a6f3-bd75be777382] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d4d8653c-6a9d-4e5f-a6f3-bd75be777382] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d4d8653c-6a9d-4e5f-a6f3-bd75be777382] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.034923003s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-000295 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-000295 /tmp/TestFunctionalparallelMountCmdany-port1203537463/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.14s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 service list -o json
functional_test.go:1493: Took "359.781061ms" to run "out/minikube-linux-amd64 -p functional-000295 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.159:31859
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.159:31859
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image save gcr.io/google-containers/addon-resizer:functional-000295 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-000295 image save gcr.io/google-containers/addon-resizer:functional-000295 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.256981059s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.26s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image rm gcr.io/google-containers/addon-resizer:functional-000295 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-000295 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.852436264s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.09s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-000295
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 image save --daemon gcr.io/google-containers/addon-resizer:functional-000295 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-000295 image save --daemon gcr.io/google-containers/addon-resizer:functional-000295 --alsologtostderr: (1.157422234s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-000295
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.20s)

TestFunctional/parallel/MountCmd/specific-port (1.86s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-000295 /tmp/TestFunctionalparallelMountCmdspecific-port2856759200/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000295 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (322.17508ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-000295 /tmp/TestFunctionalparallelMountCmdspecific-port2856759200/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000295 ssh "sudo umount -f /mount-9p": exit status 1 (250.666172ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-000295 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-000295 /tmp/TestFunctionalparallelMountCmdspecific-port2856759200/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-000295 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4262293141/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-000295 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4262293141/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-000295 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4262293141/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000295 ssh "findmnt -T" /mount1: exit status 1 (310.430319ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-000295 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-000295 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-000295 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4262293141/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-000295 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4262293141/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-000295 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4262293141/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-000295
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-000295
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-000295
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (84.25s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-474162 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0906 23:52:39.516143   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-474162 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.249119976s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (84.25s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.49s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-474162 addons enable ingress --alsologtostderr -v=5
E0906 23:54:01.436996   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-474162 addons enable ingress --alsologtostderr -v=5: (17.491200889s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.49s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-474162 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

TestJSONOutput/start/Command (101.48s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-375099 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0906 23:57:05.810244   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0906 23:57:46.771385   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-375099 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m41.483293865s)
--- PASS: TestJSONOutput/start/Command (101.48s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-375099 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-375099 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (92.17s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-375099 --output=json --user=testUser
E0906 23:59:02.117278   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:02.122557   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:02.132852   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:02.153146   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:02.193436   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:02.273756   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:02.434177   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:02.754749   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:03.395664   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:04.675981   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:07.236871   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:08.692612   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0906 23:59:12.357927   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:22.599030   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0906 23:59:43.079526   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-375099 --output=json --user=testUser: (1m32.166836869s)
--- PASS: TestJSONOutput/stop/Command (92.17s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestErrorJSONOutput (0.18s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-425048 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-425048 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.74078ms)

-- stdout --
	{"specversion":"1.0","id":"634888b2-3f97-4bf7-a676-e7191dfbe177","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-425048] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e94e97c-d93b-4362-a88f-075e3a0a3dcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17174"}}
	{"specversion":"1.0","id":"36c9a001-6221-4467-a119-cf1e6b5823d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6c6535d1-7bb4-4dd3-9c17-dc1895c6735e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig"}}
	{"specversion":"1.0","id":"45342930-8844-499c-9da0-e1816abc0ca8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube"}}
	{"specversion":"1.0","id":"4e5b69f0-512e-4d2c-95a3-6342279f146d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b6b96a96-fc97-4142-b752-4c4c26a019f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aabb7adb-7110-44c5-b132-55d86f1cf35f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-425048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-425048
--- PASS: TestErrorJSONOutput (0.18s)

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (99.36s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-237750 --driver=kvm2  --container-runtime=crio
E0907 00:00:24.041029   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-237750 --driver=kvm2  --container-runtime=crio: (48.516888844s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-240731 --driver=kvm2  --container-runtime=crio
E0907 00:01:17.593884   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:01:24.847743   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0907 00:01:45.961478   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-240731 --driver=kvm2  --container-runtime=crio: (48.283239668s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-237750
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-240731
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-240731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-240731
helpers_test.go:175: Cleaning up "first-237750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-237750
--- PASS: TestMinikubeProfile (99.36s)

TestMountStart/serial/StartWithMountFirst (32.18s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-624661 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0907 00:01:52.532803   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-624661 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.180422981s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.18s)

TestMountStart/serial/VerifyMountFirst (0.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-624661 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-624661 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (28.17s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-643751 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-643751 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.168668266s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.17s)

TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-643751 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-643751 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.84s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-624661 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.84s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-643751 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-643751 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.21s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-643751
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-643751: (1.211863164s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (24.7s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-643751
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-643751: (23.702025198s)
--- PASS: TestMountStart/serial/RestartStopped (24.70s)

TestMountStart/serial/VerifyMountPostStop (0.36s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-643751 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-643751 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (123.74s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-816061 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0907 00:04:02.117222   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0907 00:04:29.801849   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-816061 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m3.329012251s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (123.74s)

TestMultiNode/serial/DeployApp2Nodes (6.43s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-816061 -- rollout status deployment/busybox: (4.702081283s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- exec busybox-5bc68d56bd-mq552 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- exec busybox-5bc68d56bd-zvzjl -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- exec busybox-5bc68d56bd-mq552 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- exec busybox-5bc68d56bd-zvzjl -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- exec busybox-5bc68d56bd-mq552 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-816061 -- exec busybox-5bc68d56bd-zvzjl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.43s)
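A condensed sketch of the deploy-and-resolve flow exercised above, using the same profile and manifest; <busybox-pod> is a placeholder for one of the pod names printed by the get pods step:
    # roll out the two-replica busybox deployment
    minikube kubectl -p multinode-816061 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-816061 -- rollout status deployment/busybox
    # check cluster DNS from a pod scheduled on each node
    minikube kubectl -p multinode-816061 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local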

                                                
                                    
x
+
TestMultiNode/serial/AddNode (45.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-816061 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-816061 -v 3 --alsologtostderr: (44.516494926s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.13s)
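Equivalent standalone commands for growing the cluster by one worker, as run above (a sketch, same profile and flags):
    minikube node add -p multinode-816061 -v 3 --alsologtostderr
    minikube -p multinode-816061 status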

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 status --output json --alsologtostderr
E0907 00:06:17.592999   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 cp testdata/cp-test.txt multinode-816061:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 cp multinode-816061:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3647011183/001/cp-test_multinode-816061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 cp multinode-816061:/home/docker/cp-test.txt multinode-816061-m02:/home/docker/cp-test_multinode-816061_multinode-816061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061-m02 "sudo cat /home/docker/cp-test_multinode-816061_multinode-816061-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 cp multinode-816061:/home/docker/cp-test.txt multinode-816061-m03:/home/docker/cp-test_multinode-816061_multinode-816061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061-m03 "sudo cat /home/docker/cp-test_multinode-816061_multinode-816061-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 cp testdata/cp-test.txt multinode-816061-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 cp multinode-816061-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3647011183/001/cp-test_multinode-816061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 cp multinode-816061-m02:/home/docker/cp-test.txt multinode-816061:/home/docker/cp-test_multinode-816061-m02_multinode-816061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061 "sudo cat /home/docker/cp-test_multinode-816061-m02_multinode-816061.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 cp multinode-816061-m02:/home/docker/cp-test.txt multinode-816061-m03:/home/docker/cp-test_multinode-816061-m02_multinode-816061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061-m03 "sudo cat /home/docker/cp-test_multinode-816061-m02_multinode-816061-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 cp testdata/cp-test.txt multinode-816061-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 cp multinode-816061-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3647011183/001/cp-test_multinode-816061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 cp multinode-816061-m03:/home/docker/cp-test.txt multinode-816061:/home/docker/cp-test_multinode-816061-m03_multinode-816061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061 "sudo cat /home/docker/cp-test_multinode-816061-m03_multinode-816061.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 cp multinode-816061-m03:/home/docker/cp-test.txt multinode-816061-m02:/home/docker/cp-test_multinode-816061-m03_multinode-816061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 ssh -n multinode-816061-m02 "sudo cat /home/docker/cp-test_multinode-816061-m03_multinode-816061-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.40s)
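The copy matrix above reduces to three `minikube cp` directions plus an SSH readback; a trimmed sketch with the same profile (the /tmp destination path is illustrative):
    # local file -> node
    minikube -p multinode-816061 cp testdata/cp-test.txt multinode-816061-m02:/home/docker/cp-test.txt
    # node -> local machine
    minikube -p multinode-816061 cp multinode-816061-m02:/home/docker/cp-test.txt /tmp/cp-test_m02.txt
    # node -> node, then verify over SSH
    minikube -p multinode-816061 cp multinode-816061-m02:/home/docker/cp-test.txt multinode-816061-m03:/home/docker/cp-test.txt
    minikube -p multinode-816061 ssh -n multinode-816061-m03 "sudo cat /home/docker/cp-test.txt"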

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 node stop m03
E0907 00:06:24.846411   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-816061 node stop m03: (1.382202068s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-816061 status: exit status 7 (436.304646ms)

                                                
                                                
-- stdout --
	multinode-816061
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-816061-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-816061-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-816061 status --alsologtostderr: exit status 7 (446.385224ms)

                                                
                                                
-- stdout --
	multinode-816061
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-816061-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-816061-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:06:26.535866   29138 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:06:26.536254   29138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:06:26.536267   29138 out.go:309] Setting ErrFile to fd 2...
	I0907 00:06:26.536274   29138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:06:26.536721   29138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:06:26.537013   29138 out.go:303] Setting JSON to false
	I0907 00:06:26.537045   29138 mustload.go:65] Loading cluster: multinode-816061
	I0907 00:06:26.537199   29138 notify.go:220] Checking for updates...
	I0907 00:06:26.537751   29138 config.go:182] Loaded profile config "multinode-816061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:06:26.537775   29138 status.go:255] checking status of multinode-816061 ...
	I0907 00:06:26.538154   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:06:26.538221   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:26.554469   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37953
	I0907 00:06:26.554887   29138 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:26.555403   29138 main.go:141] libmachine: Using API Version  1
	I0907 00:06:26.555425   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:26.555821   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:26.556042   29138 main.go:141] libmachine: (multinode-816061) Calling .GetState
	I0907 00:06:26.557699   29138 status.go:330] multinode-816061 host status = "Running" (err=<nil>)
	I0907 00:06:26.557718   29138 host.go:66] Checking if "multinode-816061" exists ...
	I0907 00:06:26.558125   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:06:26.558174   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:26.572962   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41607
	I0907 00:06:26.573372   29138 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:26.573850   29138 main.go:141] libmachine: Using API Version  1
	I0907 00:06:26.573871   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:26.574202   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:26.574382   29138 main.go:141] libmachine: (multinode-816061) Calling .GetIP
	I0907 00:06:26.577201   29138 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:06:26.577628   29138 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:06:26.577650   29138 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:06:26.577798   29138 host.go:66] Checking if "multinode-816061" exists ...
	I0907 00:06:26.578225   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:06:26.578273   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:26.593924   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45301
	I0907 00:06:26.594330   29138 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:26.594821   29138 main.go:141] libmachine: Using API Version  1
	I0907 00:06:26.594851   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:26.595189   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:26.595428   29138 main.go:141] libmachine: (multinode-816061) Calling .DriverName
	I0907 00:06:26.595602   29138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 00:06:26.595639   29138 main.go:141] libmachine: (multinode-816061) Calling .GetSSHHostname
	I0907 00:06:26.598384   29138 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:06:26.598836   29138 main.go:141] libmachine: (multinode-816061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:52:c5", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:03:34 +0000 UTC Type:0 Mac:52:54:00:ef:52:c5 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-816061 Clientid:01:52:54:00:ef:52:c5}
	I0907 00:06:26.598888   29138 main.go:141] libmachine: (multinode-816061) DBG | domain multinode-816061 has defined IP address 192.168.39.212 and MAC address 52:54:00:ef:52:c5 in network mk-multinode-816061
	I0907 00:06:26.599061   29138 main.go:141] libmachine: (multinode-816061) Calling .GetSSHPort
	I0907 00:06:26.599300   29138 main.go:141] libmachine: (multinode-816061) Calling .GetSSHKeyPath
	I0907 00:06:26.599475   29138 main.go:141] libmachine: (multinode-816061) Calling .GetSSHUsername
	I0907 00:06:26.599692   29138 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061/id_rsa Username:docker}
	I0907 00:06:26.696084   29138 ssh_runner.go:195] Run: systemctl --version
	I0907 00:06:26.702724   29138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:06:26.718713   29138 kubeconfig.go:92] found "multinode-816061" server: "https://192.168.39.212:8443"
	I0907 00:06:26.718741   29138 api_server.go:166] Checking apiserver status ...
	I0907 00:06:26.718812   29138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:06:26.733349   29138 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1070/cgroup
	I0907 00:06:26.745597   29138 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod17d9280f4f521ce2f8119c5c317f1d67/crio-02e80e012439df472c81397d2989f8ebc1392fe20f1560a27aecc9988f01b4ac"
	I0907 00:06:26.745666   29138 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod17d9280f4f521ce2f8119c5c317f1d67/crio-02e80e012439df472c81397d2989f8ebc1392fe20f1560a27aecc9988f01b4ac/freezer.state
	I0907 00:06:26.758687   29138 api_server.go:204] freezer state: "THAWED"
	I0907 00:06:26.758709   29138 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0907 00:06:26.764313   29138 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I0907 00:06:26.764332   29138 status.go:421] multinode-816061 apiserver status = Running (err=<nil>)
	I0907 00:06:26.764340   29138 status.go:257] multinode-816061 status: &{Name:multinode-816061 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:06:26.764354   29138 status.go:255] checking status of multinode-816061-m02 ...
	I0907 00:06:26.764643   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:06:26.764674   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:26.780189   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45767
	I0907 00:06:26.780655   29138 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:26.781113   29138 main.go:141] libmachine: Using API Version  1
	I0907 00:06:26.781131   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:26.781460   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:26.781638   29138 main.go:141] libmachine: (multinode-816061-m02) Calling .GetState
	I0907 00:06:26.783194   29138 status.go:330] multinode-816061-m02 host status = "Running" (err=<nil>)
	I0907 00:06:26.783217   29138 host.go:66] Checking if "multinode-816061-m02" exists ...
	I0907 00:06:26.783558   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:06:26.783601   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:26.798015   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I0907 00:06:26.798423   29138 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:26.798908   29138 main.go:141] libmachine: Using API Version  1
	I0907 00:06:26.798927   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:26.799221   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:26.799384   29138 main.go:141] libmachine: (multinode-816061-m02) Calling .GetIP
	I0907 00:06:26.801837   29138 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:06:26.802213   29138 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:06:26.802243   29138 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:06:26.802358   29138 host.go:66] Checking if "multinode-816061-m02" exists ...
	I0907 00:06:26.802734   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:06:26.802794   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:26.816961   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34767
	I0907 00:06:26.817331   29138 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:26.817859   29138 main.go:141] libmachine: Using API Version  1
	I0907 00:06:26.817877   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:26.818182   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:26.818356   29138 main.go:141] libmachine: (multinode-816061-m02) Calling .DriverName
	I0907 00:06:26.818548   29138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 00:06:26.818571   29138 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHHostname
	I0907 00:06:26.821151   29138 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:06:26.821537   29138 main.go:141] libmachine: (multinode-816061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:bb", ip: ""} in network mk-multinode-816061: {Iface:virbr1 ExpiryTime:2023-09-07 01:04:42 +0000 UTC Type:0 Mac:52:54:00:72:a5:bb Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-816061-m02 Clientid:01:52:54:00:72:a5:bb}
	I0907 00:06:26.821579   29138 main.go:141] libmachine: (multinode-816061-m02) DBG | domain multinode-816061-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:72:a5:bb in network mk-multinode-816061
	I0907 00:06:26.821695   29138 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHPort
	I0907 00:06:26.821868   29138 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHKeyPath
	I0907 00:06:26.821996   29138 main.go:141] libmachine: (multinode-816061-m02) Calling .GetSSHUsername
	I0907 00:06:26.822164   29138 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6470/.minikube/machines/multinode-816061-m02/id_rsa Username:docker}
	I0907 00:06:26.906572   29138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:06:26.921577   29138 status.go:257] multinode-816061-m02 status: &{Name:multinode-816061-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:06:26.921609   29138 status.go:255] checking status of multinode-816061-m03 ...
	I0907 00:06:26.921915   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0907 00:06:26.921966   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:26.936835   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42205
	I0907 00:06:26.937224   29138 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:26.937685   29138 main.go:141] libmachine: Using API Version  1
	I0907 00:06:26.937715   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:26.938027   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:26.938227   29138 main.go:141] libmachine: (multinode-816061-m03) Calling .GetState
	I0907 00:06:26.939782   29138 status.go:330] multinode-816061-m03 host status = "Stopped" (err=<nil>)
	I0907 00:06:26.939800   29138 status.go:343] host is not running, skipping remaining checks
	I0907 00:06:26.939807   29138 status.go:257] multinode-816061-m03 status: &{Name:multinode-816061-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
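Sketch of the single-node stop exercised here; the non-zero status exits above are expected while one node is down, which is why the test still passes (the node is started again in the next step):
    minikube -p multinode-816061 node stop m03
    minikube -p multinode-816061 status    # non-zero (7 in this run) while m03 reports Stopped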

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (34.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-816061 node start m03 --alsologtostderr: (33.491747662s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (34.11s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-816061 node delete m03: (1.217663374s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.75s)
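A minimal sketch of removing a worker and confirming the node list shrank, mirroring the commands above:
    minikube -p multinode-816061 node delete m03
    kubectl get nodes    # the deleted node should no longer appear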

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (444.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-816061 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0907 00:21:17.593398   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:21:24.848285   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0907 00:24:02.118192   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0907 00:24:20.641073   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:26:17.592816   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:26:24.846693   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-816061 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m23.797809842s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-816061 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (444.32s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (50.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-816061
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-816061-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-816061-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.009141ms)

                                                
                                                
-- stdout --
	* [multinode-816061-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-816061-m02' is duplicated with machine name 'multinode-816061-m02' in profile 'multinode-816061'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-816061-m03 --driver=kvm2  --container-runtime=crio
E0907 00:29:02.117669   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-816061-m03 --driver=kvm2  --container-runtime=crio: (48.736406201s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-816061
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-816061: exit status 80 (215.681699ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-816061
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-816061-m03 already exists in multinode-816061-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-816061-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-816061-m03: (1.00146939s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.05s)
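What the conflict check boils down to: a new profile name may not shadow a machine name belonging to an existing multi-node profile. A sketch of the failing and succeeding cases (the second profile name is an arbitrary placeholder):
    # exit status 14 (MK_USAGE): '-m02' is already a machine of profile multinode-816061
    minikube start -p multinode-816061-m02 --driver=kvm2 --container-runtime=crio
    # any non-colliding name starts normally
    minikube start -p my-separate-cluster --driver=kvm2 --container-runtime=crio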

                                                
                                    
x
+
TestScheduledStopUnix (116.5s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-825679 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-825679 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.945505444s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825679 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-825679 -n scheduled-stop-825679
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825679 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825679 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-825679 -n scheduled-stop-825679
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-825679
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825679 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-825679
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-825679: exit status 7 (56.840156ms)

                                                
                                                
-- stdout --
	scheduled-stop-825679
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-825679 -n scheduled-stop-825679
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-825679 -n scheduled-stop-825679: exit status 7 (62.735137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-825679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-825679
--- PASS: TestScheduledStopUnix (116.50s)
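Condensed sketch of the scheduled-stop workflow this test walks through (same profile and flags as above):
    # arm a stop five minutes out, then inspect the remaining time
    minikube stop -p scheduled-stop-825679 --schedule 5m
    minikube status --format={{.TimeToStop}} -p scheduled-stop-825679
    # cancel the pending stop
    minikube stop -p scheduled-stop-825679 --cancel-scheduled
    # re-arm with a short delay; once it fires, status exits 7 and reports Stopped
    minikube stop -p scheduled-stop-825679 --schedule 15s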

                                                
                                    
x
+
TestKubernetesUpgrade (172.17s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-049830 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-049830 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m33.969538835s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-049830
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-049830: (2.113336976s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-049830 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-049830 status --format={{.Host}}: exit status 7 (70.135848ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-049830 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-049830 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.41216191s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-049830 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-049830 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-049830 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (116.136777ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-049830] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-049830
	    minikube start -p kubernetes-upgrade-049830 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0498302 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-049830 --kubernetes-version=v1.28.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-049830 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-049830 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.235174681s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-049830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-049830
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-049830: (1.159934563s)
--- PASS: TestKubernetesUpgrade (172.17s)
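The upgrade path exercised above, as standalone commands (same profile and versions); the final downgrade attempt is expected to fail with exit status 106:
    # bring up the old version, then stop it
    minikube start -p kubernetes-upgrade-049830 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-049830
    # restart against the newer version to upgrade in place
    minikube start -p kubernetes-upgrade-049830 --memory=2200 --kubernetes-version=v1.28.1 --driver=kvm2 --container-runtime=crio
    # downgrades are rejected (K8S_DOWNGRADE_UNSUPPORTED); delete and recreate instead
    minikube start -p kubernetes-upgrade-049830 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio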

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-340842 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-340842 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (78.846435ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-340842] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
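As the error text above spells out, --no-kubernetes cannot be combined with an explicit --kubernetes-version; a sketch of the rejected call and the suggested fix:
    # rejected with MK_USAGE (exit status 14)
    minikube start -p NoKubernetes-340842 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # clear any globally configured version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-340842 --no-kubernetes --driver=kvm2 --container-runtime=crio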

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (106.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-340842 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-340842 --driver=kvm2  --container-runtime=crio: (1m45.932500964s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-340842 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (106.22s)

                                                
                                    
x
+
TestPause/serial/Start (69.09s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-294956 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-294956 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m9.085438628s)
--- PASS: TestPause/serial/Start (69.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (41.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-340842 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-340842 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.993208308s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-340842 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-340842 status -o json: exit status 2 (279.563265ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-340842","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-340842
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-340842: (1.164793652s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.44s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (33.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-340842 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-340842 --no-kubernetes --driver=kvm2  --container-runtime=crio: (33.830144267s)
--- PASS: TestNoKubernetes/serial/Start (33.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-340842 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-340842 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.409357ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-340842
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-340842: (2.231537633s)
--- PASS: TestNoKubernetes/serial/Stop (2.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (53.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-340842 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-340842 --driver=kvm2  --container-runtime=crio: (53.552391214s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (53.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-340842 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-340842 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.591427ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
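The "kubelet not running" checks above are a single SSH'd systemctl probe; a non-zero exit is the expected (passing) outcome when Kubernetes is disabled:
    minikube ssh -p NoKubernetes-340842 "sudo systemctl is-active --quiet service kubelet"    # non-zero while kubelet is inactive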

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-965889 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-965889 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (100.931959ms)

                                                
                                                
-- stdout --
	* [false-965889] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:40:05.815367   41157 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:40:05.815834   41157 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:40:05.815847   41157 out.go:309] Setting ErrFile to fd 2...
	I0907 00:40:05.815855   41157 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:40:05.816343   41157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6470/.minikube/bin
	I0907 00:40:05.817280   41157 out.go:303] Setting JSON to false
	I0907 00:40:05.818519   41157 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4950,"bootTime":1694042256,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:40:05.818587   41157 start.go:138] virtualization: kvm guest
	I0907 00:40:05.820634   41157 out.go:177] * [false-965889] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:40:05.822193   41157 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:40:05.823638   41157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:40:05.822246   41157 notify.go:220] Checking for updates...
	I0907 00:40:05.825356   41157 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6470/kubeconfig
	I0907 00:40:05.826768   41157 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6470/.minikube
	I0907 00:40:05.828196   41157 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:40:05.829679   41157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:40:05.831469   41157 config.go:182] Loaded profile config "cert-expiration-386196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:40:05.831568   41157 config.go:182] Loaded profile config "cert-options-818054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0907 00:40:05.831641   41157 config.go:182] Loaded profile config "kubernetes-upgrade-049830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0907 00:40:05.831734   41157 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:40:05.869351   41157 out.go:177] * Using the kvm2 driver based on user configuration
	I0907 00:40:05.870984   41157 start.go:298] selected driver: kvm2
	I0907 00:40:05.871001   41157 start.go:902] validating driver "kvm2" against <nil>
	I0907 00:40:05.871011   41157 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:40:05.873157   41157 out.go:177] 
	W0907 00:40:05.874681   41157 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0907 00:40:05.876011   41157 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-965889 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-965889

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-965889

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-965889

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-965889

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-965889

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-965889

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-965889

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-965889

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-965889

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-965889

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: false-965889

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-965889" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-965889" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:38:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.61.203:8443
  name: cert-expiration-386196
contexts:
- context:
    cluster: cert-expiration-386196
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:38:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-386196
  name: cert-expiration-386196
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-386196
  user:
    client-certificate: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/cert-expiration-386196/client.crt
    client-key: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/cert-expiration-386196/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-965889

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965889"

                                                
                                                
----------------------- debugLogs end: false-965889 [took: 2.531883924s] --------------------------------
helpers_test.go:175: Cleaning up "false-965889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-965889
--- PASS: TestNetworkPlugins/group/false (2.77s)
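Every command in the debugLogs block above reports "context was not found" or "Profile \"false-965889\" not found" because the false CNI case appears to fail its start by design (the whole group takes only 2.77s), so no cluster or kubeconfig context is ever created and the post-test log collection has nothing to query; the only remaining context in the dumped kubeconfig is cert-expiration-386196. A minimal sketch of how one could confirm which profiles and contexts actually exist on the test host; the first command is the one suggested by the errors above, the second is an illustrative addition:

  # list the profiles minikube knows about on this host
  minikube profile list
  # list kubectl contexts in the merged kubeconfig (only cert-expiration-386196 remains here)
  kubectl config get-contexts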

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.96s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (132.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-940806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0907 00:41:00.641403   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:41:17.592989   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:41:24.847070   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-940806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m12.218528866s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (132.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (84.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-321164 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-321164 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (1m24.941694959s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (84.94s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (62.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-546209 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-546209 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (1m2.826984844s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.83s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-940806 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d6b57bdf-c5c5-42d9-b2bc-71a854896eca] Pending
helpers_test.go:344: "busybox" [d6b57bdf-c5c5-42d9-b2bc-71a854896eca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d6b57bdf-c5c5-42d9-b2bc-71a854896eca] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.03407811s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-940806 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-940806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-940806 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-321164 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b1b372d2-c5b6-4b03-8d63-ec34fa536243] Pending
helpers_test.go:344: "busybox" [b1b372d2-c5b6-4b03-8d63-ec34fa536243] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b1b372d2-c5b6-4b03-8d63-ec34fa536243] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.03013176s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-321164 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.48s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-546209 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [76f7d42f-7e32-4112-ae4e-053d2addea0e] Pending
helpers_test.go:344: "busybox" [76f7d42f-7e32-4112-ae4e-053d2addea0e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [76f7d42f-7e32-4112-ae4e-053d2addea0e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.030283331s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-546209 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.47s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-321164 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-321164 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.136906843s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-321164 describe deploy/metrics-server -n kube-system
E0907 00:44:02.117539   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-546209 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-546209 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.115693429s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-546209 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-690155
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-773466 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-773466 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (1m1.50697477s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.51s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-773466 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5fd80493-eaa4-4576-b185-e4544930616c] Pending
helpers_test.go:344: "busybox" [5fd80493-eaa4-4576-b185-e4544930616c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5fd80493-eaa4-4576-b185-e4544930616c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.031694768s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-773466 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (798.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-940806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-940806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (13m18.019954749s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-940806 -n old-k8s-version-940806
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (798.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-773466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-773466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.067454837s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-773466 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (611.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-321164 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-321164 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (10m11.39558487s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-321164 -n no-preload-321164
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (611.65s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (566.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-546209 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-546209 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (9m25.849233114s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-546209 -n embed-certs-546209
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (566.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (474.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-773466 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
E0907 00:48:45.163754   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0907 00:49:02.117273   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0907 00:51:17.593404   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 00:51:24.846551   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
E0907 00:54:02.117406   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-773466 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (7m54.069338942s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
E0907 00:56:24.846642   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (474.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (62.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-294457 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
E0907 01:11:17.592985   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
E0907 01:11:24.846291   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/functional-000295/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-294457 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (1m2.926384663s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.93s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-294457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-294457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.795094583s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.80s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-294457 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-294457 --alsologtostderr -v=3: (11.111977064s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-294457 -n newest-cni-294457
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-294457 -n newest-cni-294457: exit status 7 (62.868943ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-294457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
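The EnableAddonAfterStop step treats exit status 7 from minikube status as an acceptable "stopped host" signal before enabling the dashboard addon against the stopped profile; the addon configuration is expected to be recorded and applied on the next start. A minimal sketch of the same sequence, reusing the newest-cni-294457 profile and flags recorded above:

  # exit status 7 indicates a stopped host; the test tolerates it
  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-294457 -n newest-cni-294457
  # enabling an addon while stopped records it for the next start
  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-294457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4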

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (54.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-294457 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-294457 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (53.957968541s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-294457 -n newest-cni-294457
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (54.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (109.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m49.205408858s)
--- PASS: TestNetworkPlugins/group/auto/Start (109.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (82.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0907 01:13:08.238755   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
E0907 01:13:08.244121   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
E0907 01:13:08.254712   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
E0907 01:13:08.275346   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
E0907 01:13:08.316198   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
E0907 01:13:08.397279   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
E0907 01:13:08.557692   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
E0907 01:13:08.878597   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
E0907 01:13:09.519432   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m22.108115475s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-294457 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)
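The VerifyKubernetesImages step lists images through the container runtime interface rather than through the Kubernetes API, so it does not depend on the control plane being reachable. A minimal sketch of the same inspection, reusing the profile name from this run; the human-readable variant is an illustrative addition:

  # dump the CRI-O image list as JSON over SSH (as the test does)
  out/minikube-linux-amd64 ssh -p newest-cni-294457 "sudo crictl images -o json"
  # human-readable variant of the same listing
  out/minikube-linux-amd64 ssh -p newest-cni-294457 "sudo crictl images"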

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-294457 --alsologtostderr -v=1
E0907 01:13:10.800458   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-294457 --alsologtostderr -v=1: (1.004938517s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-294457 -n newest-cni-294457
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-294457 -n newest-cni-294457: exit status 2 (275.320171ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-294457 -n newest-cni-294457
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-294457 -n newest-cni-294457: exit status 2 (272.193188ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-294457 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-294457 -n newest-cni-294457
E0907 01:13:13.360921   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-294457 -n newest-cni-294457
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.19s)
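The Pause step reads the non-zero exits above as expected state: after pause, the apiserver reports Paused and the kubelet reports Stopped, each with exit status 2, which the test accepts before unpausing. A minimal sketch of the pause/inspect/unpause cycle using the newest-cni-294457 profile and the exact flags recorded above:

  out/minikube-linux-amd64 pause -p newest-cni-294457 --alsologtostderr -v=1
  # exit status 2 while paused is expected ("may be ok" in the test output)
  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-294457 -n newest-cni-294457
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-294457 -n newest-cni-294457
  out/minikube-linux-amd64 unpause -p newest-cni-294457 --alsologtostderr -v=1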

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (110.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0907 01:13:18.481144   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
E0907 01:13:28.721891   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
E0907 01:13:49.202770   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
E0907 01:13:49.779244   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
E0907 01:13:49.784532   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
E0907 01:13:49.794833   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
E0907 01:13:49.815126   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
E0907 01:13:49.855449   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
E0907 01:13:49.935820   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
E0907 01:13:50.096270   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
E0907 01:13:50.417359   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
E0907 01:13:51.058312   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
E0907 01:13:52.338467   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
E0907 01:13:54.899149   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
E0907 01:14:00.020291   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
E0907 01:14:02.117840   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/ingress-addon-legacy-474162/client.crt: no such file or directory
E0907 01:14:10.261032   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m50.83941433s)
--- PASS: TestNetworkPlugins/group/calico/Start (110.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-965889 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-965889 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fhd9f" [e293f796-bc9d-48a3-9dc8-eef9b99be6e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fhd9f" [e293f796-bc9d-48a3-9dc8-eef9b99be6e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.013854651s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dkttl" [b365e693-4da9-4a03-a263-1d21a52edd36] Running
E0907 01:14:20.642232   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.025287309s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-965889 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-965889 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-82z94" [40849028-88ed-4fda-8700-10ed2ab99cc6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-82z94" [40849028-88ed-4fda-8700-10ed2ab99cc6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.012168278s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-965889 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
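The DNS, Localhost and HairPin checks above all run from inside the netcat pod and differ only in the target they probe. A consolidated sketch of the three probes, assuming (as the logged commands imply) that the deployment listens on port 8080 and is fronted by a Service named netcat:

  # DNS: resolve the kubernetes.default Service through cluster DNS.
  $ kubectl --context auto-965889 exec deployment/netcat -- nslookup kubernetes.default
  # Localhost: the pod reaches its own container port over loopback.
  $ kubectl --context auto-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # HairPin: the pod reaches itself back through the netcat Service name,
  # exercising hairpin traffic through the CNI plugin under test.
  $ kubectl --context auto-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"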

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-773466 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
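The image verification is a single crictl call on the node; the harness then flags any tag it does not recognise as a minikube or Kubernetes image, which is what the two "Found non-minikube image" lines above record. A sketch of inspecting the same data by hand, assuming the usual crictl JSON layout and that jq is available wherever the command is run:

  # List all image tags known to CRI-O on the node.
  $ out/minikube-linux-amd64 ssh -p default-k8s-diff-port-773466 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'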

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-773466 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466: exit status 2 (271.395483ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466: exit status 2 (285.867007ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-773466 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-773466 -n default-k8s-diff-port-773466
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.87s)
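The Pause subtest exercises a full pause/unpause round trip, treating exit status 2 from the status command as the expected signal while the cluster is paused (hence the "status error: exit status 2 (may be ok)" lines above). A sketch of the same cycle run manually:

  # Pause the cluster, confirm the reported component states, then unpause.
  $ out/minikube-linux-amd64 pause -p default-k8s-diff-port-773466 --alsologtostderr -v=1
  $ out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-773466   # "Paused", exit status 2
  $ out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-773466     # "Stopped", exit status 2
  $ out/minikube-linux-amd64 unpause -p default-k8s-diff-port-773466 --alsologtostderr -v=1
  $ out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-773466   # expected to report Running again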

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m32.001921382s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.00s)
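Unlike the built-in plugin names used elsewhere in this run (calico, flannel, bridge), the custom-flannel profile passes a manifest path to --cni. A sketch of the invocation, assuming kube-flannel.yaml (not shown here) is a standard Flannel deployment manifest reachable relative to the working directory:

  # Start a profile whose CNI comes from a user-supplied manifest rather
  # than a bundled plugin.
  $ out/minikube-linux-amd64 start -p custom-flannel-965889 --memory=3072 \
      --wait=true --wait-timeout=15m \
      --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio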

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-965889 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (123.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m3.925759014s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (123.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (128.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m8.356014306s)
--- PASS: TestNetworkPlugins/group/flannel/Start (128.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zrx62" [038faffc-7a91-4b99-b061-af89076d2fd2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.031530658s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-965889 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-965889 replace --force -f testdata/netcat-deployment.yaml
E0907 01:15:11.702301   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/no-preload-321164/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cjdwd" [49c0713e-8741-4d26-a05a-be117c303934] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cjdwd" [49c0713e-8741-4d26-a05a-be117c303934] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.014493239s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-965889 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (123.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0907 01:15:47.216342   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
E0907 01:15:47.221634   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
E0907 01:15:47.231921   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
E0907 01:15:47.252205   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
E0907 01:15:47.292461   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
E0907 01:15:47.372784   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
E0907 01:15:47.533203   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
E0907 01:15:47.853402   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
E0907 01:15:48.493649   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
E0907 01:15:49.774737   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
E0907 01:15:52.084581   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/old-k8s-version-940806/client.crt: no such file or directory
E0907 01:15:52.335249   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
E0907 01:15:57.455735   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-965889 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m3.221970389s)
--- PASS: TestNetworkPlugins/group/bridge/Start (123.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-965889 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-965889 replace --force -f testdata/netcat-deployment.yaml
E0907 01:16:07.696605   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xvxsn" [e5da11bc-1fed-4af8-9bbe-9821f753b781] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xvxsn" [e5da11bc-1fed-4af8-9bbe-9821f753b781] Running
E0907 01:16:17.593213   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/addons-503456/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.015231926s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-965889 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-965889 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-965889 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zbnp4" [7074190e-5cb9-4478-8c99-4f3acb3f6ce1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zbnp4" [7074190e-5cb9-4478-8c99-4f3acb3f6ce1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.011353986s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tznmh" [103cf94d-ce41-410f-a0d4-358d5ef3dd10] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.023196414s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-965889 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-965889 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-965889 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2dpkb" [46f48712-005c-4aa9-a1cb-20152d3fdacf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0907 01:17:09.137564   13657 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/default-k8s-diff-port-773466/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2dpkb" [46f48712-005c-4aa9-a1cb-20152d3fdacf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.01108065s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-965889 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-965889 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-965889 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f7cr2" [6222fcdb-2257-4018-9340-2e38569fe563] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f7cr2" [6222fcdb-2257-4018-9340-2e38569fe563] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.038212295s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-965889 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-965889 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (36/290)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.1/cached-images 0
13 TestDownloadOnly/v1.28.1/binaries 0
14 TestDownloadOnly/v1.28.1/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
146 TestImageBuild 0
179 TestKicCustomNetwork 0
180 TestKicExistingNetwork 0
181 TestKicCustomSubnet 0
182 TestKicStaticIP 0
213 TestChangeNoneUser 0
216 TestScheduledStopWindows 0
218 TestSkaffold 0
220 TestInsufficientStorage 0
224 TestMissingContainerUpgrade 0
233 TestStartStop/group/disable-driver-mounts 0.13
247 TestNetworkPlugins/group/kubenet 2.72
255 TestNetworkPlugins/group/cilium 3.16
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-488051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-488051
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-965889 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-965889

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-965889

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-965889

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-965889

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-965889

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-965889

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-965889

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-965889

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-965889

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-965889

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-965889

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-965889" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-965889" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:38:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.61.203:8443
  name: cert-expiration-386196
contexts:
- context:
    cluster: cert-expiration-386196
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:38:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-386196
  name: cert-expiration-386196
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-386196
  user:
    client-certificate: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/cert-expiration-386196/client.crt
    client-key: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/cert-expiration-386196/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-965889

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965889"

                                                
                                                
----------------------- debugLogs end: kubenet-965889 [took: 2.591015184s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-965889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-965889
--- SKIP: TestNetworkPlugins/group/kubenet (2.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-965889 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-965889" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17174-6470/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:38:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.61.203:8443
  name: cert-expiration-386196
contexts:
- context:
    cluster: cert-expiration-386196
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:38:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-386196
  name: cert-expiration-386196
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-386196
  user:
    client-certificate: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/cert-expiration-386196/client.crt
    client-key: /home/jenkins/minikube-integration/17174-6470/.minikube/profiles/cert-expiration-386196/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-965889

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-965889" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965889"

                                                
                                                
----------------------- debugLogs end: cilium-965889 [took: 3.019472519s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-965889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-965889
--- SKIP: TestNetworkPlugins/group/cilium (3.16s)